00:00:00.000 Started by upstream project "autotest-per-patch" build number 131134 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.052 The recommended git tool is: git 00:00:00.052 using credential 00000000-0000-0000-0000-000000000002 00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.084 Fetching changes from the remote Git repository 00:00:00.085 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.113 Using shallow fetch with depth 1 00:00:00.113 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.113 > git --version # timeout=10 00:00:00.137 > git --version # 'git version 2.39.2' 00:00:00.137 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.159 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.159 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.282 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.294 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.307 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD) 00:00:05.307 > git config core.sparsecheckout # timeout=10 00:00:05.319 > git read-tree -mu HEAD # timeout=10 00:00:05.336 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5 00:00:05.355 Commit message: "scripts/kid: add issue 3551" 00:00:05.355 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10 00:00:05.484 [Pipeline] Start of Pipeline 00:00:05.498 [Pipeline] library 00:00:05.499 Loading library shm_lib@master 00:00:05.500 Library shm_lib@master is cached. Copying from home. 00:00:05.512 [Pipeline] node 00:00:05.526 Running on WFP49 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:05.527 [Pipeline] { 00:00:05.537 [Pipeline] catchError 00:00:05.538 [Pipeline] { 00:00:05.552 [Pipeline] wrap 00:00:05.560 [Pipeline] { 00:00:05.568 [Pipeline] stage 00:00:05.570 [Pipeline] { (Prologue) 00:00:05.761 [Pipeline] sh 00:00:06.048 + logger -p user.info -t JENKINS-CI 00:00:06.065 [Pipeline] echo 00:00:06.067 Node: WFP49 00:00:06.072 [Pipeline] sh 00:00:06.370 [Pipeline] setCustomBuildProperty 00:00:06.378 [Pipeline] echo 00:00:06.379 Cleanup processes 00:00:06.383 [Pipeline] sh 00:00:06.667 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.667 1990102 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.682 [Pipeline] sh 00:00:06.971 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.971 ++ grep -v 'sudo pgrep' 00:00:06.971 ++ awk '{print $1}' 00:00:06.971 + sudo kill -9 00:00:06.971 + true 00:00:06.983 [Pipeline] cleanWs 00:00:06.992 [WS-CLEANUP] Deleting project workspace... 00:00:06.992 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.997 [WS-CLEANUP] done 00:00:07.001 [Pipeline] setCustomBuildProperty 00:00:07.013 [Pipeline] sh 00:00:07.298 + sudo git config --global --replace-all safe.directory '*' 00:00:07.381 [Pipeline] httpRequest 00:00:08.291 [Pipeline] echo 00:00:08.292 Sorcerer 10.211.164.101 is alive 00:00:08.300 [Pipeline] retry 00:00:08.302 [Pipeline] { 00:00:08.311 [Pipeline] httpRequest 00:00:08.315 HttpMethod: GET 00:00:08.316 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:08.316 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:08.324 Response Code: HTTP/1.1 200 OK 00:00:08.325 Success: Status code 200 is in the accepted range: 200,404 00:00:08.325 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:22.654 [Pipeline] } 00:00:22.668 [Pipeline] // retry 00:00:22.673 [Pipeline] sh 00:00:22.958 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:22.974 [Pipeline] httpRequest 00:00:23.374 [Pipeline] echo 00:00:23.376 Sorcerer 10.211.164.101 is alive 00:00:23.387 [Pipeline] retry 00:00:23.389 [Pipeline] { 00:00:23.403 [Pipeline] httpRequest 00:00:23.408 HttpMethod: GET 00:00:23.408 URL: http://10.211.164.101/packages/spdk_f1e77deadc1d90063c1d94ed8594110a44371a39.tar.gz 00:00:23.409 Sending request to url: http://10.211.164.101/packages/spdk_f1e77deadc1d90063c1d94ed8594110a44371a39.tar.gz 00:00:23.415 Response Code: HTTP/1.1 200 OK 00:00:23.415 Success: Status code 200 is in the accepted range: 200,404 00:00:23.415 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_f1e77deadc1d90063c1d94ed8594110a44371a39.tar.gz 00:06:04.658 [Pipeline] } 00:06:04.676 [Pipeline] // retry 00:06:04.685 [Pipeline] sh 00:06:04.976 + tar --no-same-owner -xf spdk_f1e77deadc1d90063c1d94ed8594110a44371a39.tar.gz 00:06:07.536 [Pipeline] sh 00:06:07.818 + git -C spdk log --oneline -n5 00:06:07.818 f1e77dead bdev/nvme: interrupt mode for PCIe transport 00:06:07.818 2a72c3069 nvme/poll_group: create and manage fd_group for nvme poll group 00:06:07.818 699078603 thread: Extended options for spdk_interrupt_register 00:06:07.818 7868e657c util: fix total fds to wait for 00:06:07.818 6f7c1eab6 util: handle events for vfio fd type 00:06:07.831 [Pipeline] } 00:06:07.843 [Pipeline] // stage 00:06:07.852 [Pipeline] stage 00:06:07.854 [Pipeline] { (Prepare) 00:06:07.871 [Pipeline] writeFile 00:06:07.886 [Pipeline] sh 00:06:08.174 + logger -p user.info -t JENKINS-CI 00:06:08.187 [Pipeline] sh 00:06:08.474 + logger -p user.info -t JENKINS-CI 00:06:08.486 [Pipeline] sh 00:06:08.847 + cat autorun-spdk.conf 00:06:08.847 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:08.847 SPDK_TEST_FUZZER_SHORT=1 00:06:08.847 SPDK_TEST_FUZZER=1 00:06:08.847 SPDK_TEST_SETUP=1 00:06:08.847 SPDK_RUN_UBSAN=1 00:06:08.855 RUN_NIGHTLY=0 00:06:08.859 [Pipeline] readFile 00:06:08.878 [Pipeline] withEnv 00:06:08.879 [Pipeline] { 00:06:08.891 [Pipeline] sh 00:06:09.177 + set -ex 00:06:09.177 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:06:09.177 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:06:09.177 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:09.177 ++ SPDK_TEST_FUZZER_SHORT=1 00:06:09.177 ++ SPDK_TEST_FUZZER=1 00:06:09.177 ++ SPDK_TEST_SETUP=1 00:06:09.177 ++ SPDK_RUN_UBSAN=1 00:06:09.177 ++ RUN_NIGHTLY=0 00:06:09.177 + case $SPDK_TEST_NVMF_NICS in 00:06:09.177 + DRIVERS= 00:06:09.177 + 
[[ -n '' ]] 00:06:09.177 + exit 0 00:06:09.187 [Pipeline] } 00:06:09.206 [Pipeline] // withEnv 00:06:09.211 [Pipeline] } 00:06:09.227 [Pipeline] // stage 00:06:09.239 [Pipeline] catchError 00:06:09.241 [Pipeline] { 00:06:09.275 [Pipeline] timeout 00:06:09.275 Timeout set to expire in 30 min 00:06:09.277 [Pipeline] { 00:06:09.292 [Pipeline] stage 00:06:09.294 [Pipeline] { (Tests) 00:06:09.308 [Pipeline] sh 00:06:09.600 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:06:09.600 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:06:09.600 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:06:09.600 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:06:09.600 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:09.600 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:06:09.600 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:06:09.600 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:06:09.600 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:06:09.600 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:06:09.600 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:06:09.601 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:06:09.601 + source /etc/os-release 00:06:09.601 ++ NAME='Fedora Linux' 00:06:09.601 ++ VERSION='39 (Cloud Edition)' 00:06:09.601 ++ ID=fedora 00:06:09.601 ++ VERSION_ID=39 00:06:09.601 ++ VERSION_CODENAME= 00:06:09.601 ++ PLATFORM_ID=platform:f39 00:06:09.601 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:09.601 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:09.601 ++ LOGO=fedora-logo-icon 00:06:09.601 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:09.601 ++ HOME_URL=https://fedoraproject.org/ 00:06:09.601 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:09.601 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:09.601 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:09.601 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:09.601 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:09.601 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:09.601 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:09.601 ++ SUPPORT_END=2024-11-12 00:06:09.601 ++ VARIANT='Cloud Edition' 00:06:09.601 ++ VARIANT_ID=cloud 00:06:09.601 + uname -a 00:06:09.601 Linux spdk-wfp-49 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:09.601 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:12.898 Hugepages 00:06:12.898 node hugesize free / total 00:06:12.898 node0 1048576kB 0 / 0 00:06:12.898 node0 2048kB 0 / 0 00:06:12.898 node1 1048576kB 0 / 0 00:06:12.898 node1 2048kB 0 / 0 00:06:12.898 00:06:12.898 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:12.898 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:12.898 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:12.898 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:12.898 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:12.898 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:12.898 I/OAT 0000:80:04.2 8086 2021 1 
ioatdma - - 00:06:12.898 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:12.898 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:12.898 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:12.898 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:12.898 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:12.898 + rm -f /tmp/spdk-ld-path 00:06:12.898 + source autorun-spdk.conf 00:06:12.898 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:12.898 ++ SPDK_TEST_FUZZER_SHORT=1 00:06:12.898 ++ SPDK_TEST_FUZZER=1 00:06:12.898 ++ SPDK_TEST_SETUP=1 00:06:12.898 ++ SPDK_RUN_UBSAN=1 00:06:12.898 ++ RUN_NIGHTLY=0 00:06:12.898 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:12.898 + [[ -n '' ]] 00:06:12.898 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:12.898 + for M in /var/spdk/build-*-manifest.txt 00:06:12.898 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:12.898 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:06:12.898 + for M in /var/spdk/build-*-manifest.txt 00:06:12.898 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:12.898 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:06:12.898 + for M in /var/spdk/build-*-manifest.txt 00:06:12.898 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:12.898 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:06:12.898 ++ uname 00:06:12.898 + [[ Linux == \L\i\n\u\x ]] 00:06:12.899 + sudo dmesg -T 00:06:12.899 + sudo dmesg --clear 00:06:12.899 + dmesg_pid=1992014 00:06:12.899 + [[ Fedora Linux == FreeBSD ]] 00:06:12.899 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:12.899 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:12.899 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:12.899 + [[ -x /usr/src/fio-static/fio ]] 00:06:12.899 + export FIO_BIN=/usr/src/fio-static/fio 00:06:12.899 + FIO_BIN=/usr/src/fio-static/fio 00:06:12.899 + sudo dmesg -Tw 00:06:12.899 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:12.899 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:06:12.899 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:12.899 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:12.899 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:12.899 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:12.899 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:12.899 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:12.899 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:06:12.899 Test configuration: 00:06:12.899 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:12.899 SPDK_TEST_FUZZER_SHORT=1 00:06:12.899 SPDK_TEST_FUZZER=1 00:06:12.899 SPDK_TEST_SETUP=1 00:06:12.899 SPDK_RUN_UBSAN=1 00:06:12.899 RUN_NIGHTLY=0 17:23:09 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:06:12.899 17:23:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:12.899 17:23:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:12.899 17:23:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:12.899 17:23:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.899 17:23:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.899 17:23:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.899 17:23:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.899 17:23:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.899 17:23:09 -- paths/export.sh@5 -- $ export PATH 00:06:12.899 17:23:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.899 17:23:09 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:06:12.899 17:23:09 -- common/autobuild_common.sh@486 -- $ date +%s 00:06:12.899 17:23:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728919389.XXXXXX 00:06:12.899 17:23:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728919389.Dvmoh3 00:06:12.899 17:23:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' 
]] 00:06:12.899 17:23:09 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:06:12.899 17:23:09 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:06:12.899 17:23:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:12.899 17:23:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:12.899 17:23:09 -- common/autobuild_common.sh@502 -- $ get_config_params 00:06:12.899 17:23:09 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:06:12.899 17:23:09 -- common/autotest_common.sh@10 -- $ set +x 00:06:12.899 17:23:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:12.899 17:23:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:06:12.899 17:23:09 -- pm/common@17 -- $ local monitor 00:06:12.899 17:23:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.899 17:23:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.899 17:23:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.899 17:23:09 -- pm/common@21 -- $ date +%s 00:06:12.899 17:23:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:12.899 17:23:09 -- pm/common@21 -- $ date +%s 00:06:12.899 17:23:09 -- pm/common@25 -- $ sleep 1 00:06:12.899 17:23:09 -- pm/common@21 -- $ date +%s 00:06:12.899 17:23:09 -- pm/common@21 -- $ date +%s 00:06:12.899 17:23:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919389 00:06:12.899 17:23:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919389 00:06:12.899 17:23:09 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919389 00:06:12.899 17:23:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919389 00:06:12.899 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919389_collect-vmstat.pm.log 00:06:12.899 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919389_collect-cpu-load.pm.log 00:06:12.899 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919389_collect-cpu-temp.pm.log 00:06:12.899 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919389_collect-bmc-pm.bmc.pm.log 00:06:13.842 17:23:10 -- 
common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:06:13.842 17:23:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:13.842 17:23:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:13.842 17:23:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:13.842 17:23:10 -- spdk/autobuild.sh@16 -- $ date -u 00:06:13.842 Mon Oct 14 03:23:10 PM UTC 2024 00:06:13.842 17:23:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:13.842 v25.01-pre-77-gf1e77dead 00:06:13.842 17:23:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:13.842 17:23:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:13.842 17:23:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:13.842 17:23:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:13.842 17:23:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:13.842 17:23:10 -- common/autotest_common.sh@10 -- $ set +x 00:06:13.842 ************************************ 00:06:13.842 START TEST ubsan 00:06:13.842 ************************************ 00:06:13.842 17:23:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:06:13.842 using ubsan 00:06:13.842 00:06:13.842 real 0m0.001s 00:06:13.842 user 0m0.001s 00:06:13.842 sys 0m0.000s 00:06:13.842 17:23:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:13.842 17:23:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:13.842 ************************************ 00:06:13.842 END TEST ubsan 00:06:13.842 ************************************ 00:06:13.842 17:23:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:13.842 17:23:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:13.842 17:23:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:13.842 17:23:10 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:06:13.842 17:23:10 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:06:13.842 17:23:10 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:06:13.842 17:23:10 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:06:13.842 17:23:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:13.842 17:23:10 -- common/autotest_common.sh@10 -- $ set +x 00:06:13.842 ************************************ 00:06:13.842 START TEST autobuild_llvm_precompile 00:06:13.842 ************************************ 00:06:13.842 17:23:10 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:06:13.842 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:06:14.104 Target: x86_64-redhat-linux-gnu 00:06:14.104 Thread model: posix 00:06:14.104 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:06:14.104 17:23:10 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:14.364 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:14.364 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:14.624 Using 'verbs' RDMA provider 00:06:30.467 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:45.374 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:45.374 Creating mk/config.mk...done. 00:06:45.374 Creating mk/cc.flags.mk...done. 00:06:45.374 Type 'make' to build. 00:06:45.374 00:06:45.374 real 0m30.087s 00:06:45.374 user 0m13.350s 00:06:45.374 sys 0m16.195s 00:06:45.374 17:23:40 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:45.374 17:23:40 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:06:45.374 ************************************ 00:06:45.374 END TEST autobuild_llvm_precompile 00:06:45.374 ************************************ 00:06:45.374 17:23:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:45.374 17:23:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:45.374 17:23:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:45.374 17:23:41 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:06:45.374 17:23:41 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:45.374 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:45.374 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:45.374 Using 'verbs' RDMA provider 00:06:57.859 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:10.080 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:10.080 Creating mk/config.mk...done. 00:07:10.080 Creating mk/cc.flags.mk...done. 
00:07:10.080 Type 'make' to build. 00:07:10.080 17:24:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:07:10.080 17:24:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:07:10.080 17:24:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:07:10.080 17:24:06 -- common/autotest_common.sh@10 -- $ set +x 00:07:10.080 ************************************ 00:07:10.080 START TEST make 00:07:10.080 ************************************ 00:07:10.080 17:24:06 make -- common/autotest_common.sh@1125 -- $ make -j72 00:07:10.080 make[1]: Nothing to be done for 'all'. 00:07:11.469 The Meson build system 00:07:11.469 Version: 1.5.0 00:07:11.469 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:07:11.469 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:11.469 Build type: native build 00:07:11.469 Project name: libvfio-user 00:07:11.469 Project version: 0.0.1 00:07:11.469 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:07:11.469 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:07:11.469 Host machine cpu family: x86_64 00:07:11.469 Host machine cpu: x86_64 00:07:11.469 Run-time dependency threads found: YES 00:07:11.469 Library dl found: YES 00:07:11.469 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:11.469 Run-time dependency json-c found: YES 0.17 00:07:11.469 Run-time dependency cmocka found: YES 1.1.7 00:07:11.469 Program pytest-3 found: NO 00:07:11.469 Program flake8 found: NO 00:07:11.469 Program misspell-fixer found: NO 00:07:11.469 Program restructuredtext-lint found: NO 00:07:11.469 Program valgrind found: YES (/usr/bin/valgrind) 00:07:11.469 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:11.469 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:11.469 Compiler for C supports arguments -Wwrite-strings: YES 00:07:11.469 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:07:11.469 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:07:11.469 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:07:11.469 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:07:11.469 Build targets in project: 8 00:07:11.469 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:07:11.469 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:07:11.469 00:07:11.469 libvfio-user 0.0.1 00:07:11.469 00:07:11.469 User defined options 00:07:11.469 buildtype : debug 00:07:11.469 default_library: static 00:07:11.469 libdir : /usr/local/lib 00:07:11.469 00:07:11.469 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:12.040 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:12.040 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:07:12.040 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:07:12.040 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:07:12.040 [4/36] Compiling C object samples/null.p/null.c.o 00:07:12.040 [5/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:07:12.040 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:07:12.040 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:07:12.040 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:07:12.040 [9/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:07:12.040 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:07:12.040 [11/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:07:12.040 [12/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:07:12.040 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:07:12.040 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:07:12.040 [15/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:07:12.040 [16/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:07:12.040 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:07:12.040 [18/36] Compiling C object test/unit_tests.p/mocks.c.o 00:07:12.040 [19/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:07:12.040 [20/36] Compiling C object samples/server.p/server.c.o 00:07:12.040 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:07:12.040 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:07:12.040 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:07:12.040 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:07:12.040 [25/36] Compiling C object samples/client.p/client.c.o 00:07:12.040 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:07:12.040 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:07:12.040 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:07:12.040 [29/36] Linking static target lib/libvfio-user.a 00:07:12.040 [30/36] Linking target samples/client 00:07:12.300 [31/36] Linking target samples/server 00:07:12.300 [32/36] Linking target samples/null 00:07:12.300 [33/36] Linking target samples/gpio-pci-idio-16 00:07:12.300 [34/36] Linking target test/unit_tests 00:07:12.300 [35/36] Linking target samples/lspci 00:07:12.300 [36/36] Linking target samples/shadow_ioeventfd_server 00:07:12.300 INFO: autodetecting backend as ninja 00:07:12.300 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:12.300 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:12.560 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:12.560 ninja: no work to do. 00:07:19.147 The Meson build system 00:07:19.147 Version: 1.5.0 00:07:19.147 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:07:19.147 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:07:19.147 Build type: native build 00:07:19.147 Program cat found: YES (/usr/bin/cat) 00:07:19.147 Project name: DPDK 00:07:19.147 Project version: 24.03.0 00:07:19.147 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:07:19.147 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:07:19.147 Host machine cpu family: x86_64 00:07:19.147 Host machine cpu: x86_64 00:07:19.147 Message: ## Building in Developer Mode ## 00:07:19.147 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:19.147 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:19.147 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:19.147 Program python3 found: YES (/usr/bin/python3) 00:07:19.147 Program cat found: YES (/usr/bin/cat) 00:07:19.147 Compiler for C supports arguments -march=native: YES 00:07:19.147 Checking for size of "void *" : 8 00:07:19.147 Checking for size of "void *" : 8 (cached) 00:07:19.147 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:19.147 Library m found: YES 00:07:19.147 Library numa found: YES 00:07:19.147 Has header "numaif.h" : YES 00:07:19.147 Library fdt found: NO 00:07:19.147 Library execinfo found: NO 00:07:19.147 Has header "execinfo.h" : YES 00:07:19.147 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:19.147 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:19.147 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:19.147 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:19.147 Run-time dependency openssl found: YES 3.1.1 00:07:19.147 Run-time dependency libpcap found: YES 1.10.4 00:07:19.147 Has header "pcap.h" with dependency libpcap: YES 00:07:19.147 Compiler for C supports arguments -Wcast-qual: YES 00:07:19.147 Compiler for C supports arguments -Wdeprecated: YES 00:07:19.147 Compiler for C supports arguments -Wformat: YES 00:07:19.147 Compiler for C supports arguments -Wformat-nonliteral: YES 00:07:19.147 Compiler for C supports arguments -Wformat-security: YES 00:07:19.147 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:19.147 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:19.147 Compiler for C supports arguments -Wnested-externs: YES 00:07:19.147 Compiler for C supports arguments -Wold-style-definition: YES 00:07:19.147 Compiler for C supports arguments -Wpointer-arith: YES 00:07:19.147 Compiler for C supports arguments -Wsign-compare: YES 00:07:19.147 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:19.147 Compiler for C supports arguments -Wundef: YES 00:07:19.147 Compiler for C supports arguments -Wwrite-strings: YES 00:07:19.147 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:19.147 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:07:19.147 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:07:19.147 Program objdump found: YES (/usr/bin/objdump) 00:07:19.147 Compiler for C supports arguments -mavx512f: YES 00:07:19.147 Checking if "AVX512 checking" compiles: YES 00:07:19.147 Fetching value of define "__SSE4_2__" : 1 00:07:19.147 Fetching value of define "__AES__" : 1 00:07:19.147 Fetching value of define "__AVX__" : 1 00:07:19.147 Fetching value of define "__AVX2__" : 1 00:07:19.147 Fetching value of define "__AVX512BW__" : 1 00:07:19.147 Fetching value of define "__AVX512CD__" : 1 00:07:19.147 Fetching value of define "__AVX512DQ__" : 1 00:07:19.147 Fetching value of define "__AVX512F__" : 1 00:07:19.147 Fetching value of define "__AVX512VL__" : 1 00:07:19.147 Fetching value of define "__PCLMUL__" : 1 00:07:19.147 Fetching value of define "__RDRND__" : 1 00:07:19.147 Fetching value of define "__RDSEED__" : 1 00:07:19.147 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:19.147 Fetching value of define "__znver1__" : (undefined) 00:07:19.147 Fetching value of define "__znver2__" : (undefined) 00:07:19.147 Fetching value of define "__znver3__" : (undefined) 00:07:19.147 Fetching value of define "__znver4__" : (undefined) 00:07:19.147 Compiler for C supports arguments -Wno-format-truncation: NO 00:07:19.147 Message: lib/log: Defining dependency "log" 00:07:19.147 Message: lib/kvargs: Defining dependency "kvargs" 00:07:19.147 Message: lib/telemetry: Defining dependency "telemetry" 00:07:19.147 Checking for function "getentropy" : NO 00:07:19.148 Message: lib/eal: Defining dependency "eal" 00:07:19.148 Message: lib/ring: Defining dependency "ring" 00:07:19.148 Message: lib/rcu: Defining dependency "rcu" 00:07:19.148 Message: lib/mempool: Defining dependency "mempool" 00:07:19.148 Message: lib/mbuf: Defining dependency "mbuf" 00:07:19.148 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:19.148 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:19.148 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:19.148 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:19.148 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:19.148 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:19.148 Compiler for C supports arguments -mpclmul: YES 00:07:19.148 Compiler for C supports arguments -maes: YES 00:07:19.148 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:19.148 Compiler for C supports arguments -mavx512bw: YES 00:07:19.148 Compiler for C supports arguments -mavx512dq: YES 00:07:19.148 Compiler for C supports arguments -mavx512vl: YES 00:07:19.148 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:19.148 Compiler for C supports arguments -mavx2: YES 00:07:19.148 Compiler for C supports arguments -mavx: YES 00:07:19.148 Message: lib/net: Defining dependency "net" 00:07:19.148 Message: lib/meter: Defining dependency "meter" 00:07:19.148 Message: lib/ethdev: Defining dependency "ethdev" 00:07:19.148 Message: lib/pci: Defining dependency "pci" 00:07:19.148 Message: lib/cmdline: Defining dependency "cmdline" 00:07:19.148 Message: lib/hash: Defining dependency "hash" 00:07:19.148 Message: lib/timer: Defining dependency "timer" 00:07:19.148 Message: lib/compressdev: Defining dependency "compressdev" 00:07:19.148 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:19.148 Message: lib/dmadev: Defining dependency "dmadev" 00:07:19.148 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:19.148 Message: lib/power: Defining dependency "power" 00:07:19.148 Message: lib/reorder: Defining 
dependency "reorder" 00:07:19.148 Message: lib/security: Defining dependency "security" 00:07:19.148 Has header "linux/userfaultfd.h" : YES 00:07:19.148 Has header "linux/vduse.h" : YES 00:07:19.148 Message: lib/vhost: Defining dependency "vhost" 00:07:19.148 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:07:19.148 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:19.148 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:19.148 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:19.148 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:19.148 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:19.148 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:19.148 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:19.148 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:19.148 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:19.148 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:19.148 Configuring doxy-api-html.conf using configuration 00:07:19.148 Configuring doxy-api-man.conf using configuration 00:07:19.148 Program mandb found: YES (/usr/bin/mandb) 00:07:19.148 Program sphinx-build found: NO 00:07:19.148 Configuring rte_build_config.h using configuration 00:07:19.148 Message: 00:07:19.148 ================= 00:07:19.148 Applications Enabled 00:07:19.148 ================= 00:07:19.148 00:07:19.148 apps: 00:07:19.148 00:07:19.148 00:07:19.148 Message: 00:07:19.148 ================= 00:07:19.148 Libraries Enabled 00:07:19.148 ================= 00:07:19.148 00:07:19.148 libs: 00:07:19.148 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:19.148 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:19.148 cryptodev, dmadev, power, reorder, security, vhost, 00:07:19.148 00:07:19.148 Message: 00:07:19.148 =============== 00:07:19.148 Drivers Enabled 00:07:19.148 =============== 00:07:19.148 00:07:19.148 common: 00:07:19.148 00:07:19.148 bus: 00:07:19.148 pci, vdev, 00:07:19.148 mempool: 00:07:19.148 ring, 00:07:19.148 dma: 00:07:19.148 00:07:19.148 net: 00:07:19.148 00:07:19.148 crypto: 00:07:19.148 00:07:19.148 compress: 00:07:19.148 00:07:19.148 vdpa: 00:07:19.148 00:07:19.148 00:07:19.148 Message: 00:07:19.148 ================= 00:07:19.148 Content Skipped 00:07:19.148 ================= 00:07:19.148 00:07:19.148 apps: 00:07:19.148 dumpcap: explicitly disabled via build config 00:07:19.148 graph: explicitly disabled via build config 00:07:19.148 pdump: explicitly disabled via build config 00:07:19.148 proc-info: explicitly disabled via build config 00:07:19.148 test-acl: explicitly disabled via build config 00:07:19.148 test-bbdev: explicitly disabled via build config 00:07:19.148 test-cmdline: explicitly disabled via build config 00:07:19.148 test-compress-perf: explicitly disabled via build config 00:07:19.148 test-crypto-perf: explicitly disabled via build config 00:07:19.148 test-dma-perf: explicitly disabled via build config 00:07:19.148 test-eventdev: explicitly disabled via build config 00:07:19.148 test-fib: explicitly disabled via build config 00:07:19.148 test-flow-perf: explicitly disabled via build config 00:07:19.148 test-gpudev: explicitly disabled via build config 00:07:19.148 test-mldev: explicitly disabled via build config 00:07:19.148 test-pipeline: explicitly disabled via build config 00:07:19.148 test-pmd: 
explicitly disabled via build config 00:07:19.148 test-regex: explicitly disabled via build config 00:07:19.148 test-sad: explicitly disabled via build config 00:07:19.148 test-security-perf: explicitly disabled via build config 00:07:19.148 00:07:19.148 libs: 00:07:19.148 argparse: explicitly disabled via build config 00:07:19.148 metrics: explicitly disabled via build config 00:07:19.148 acl: explicitly disabled via build config 00:07:19.148 bbdev: explicitly disabled via build config 00:07:19.148 bitratestats: explicitly disabled via build config 00:07:19.148 bpf: explicitly disabled via build config 00:07:19.148 cfgfile: explicitly disabled via build config 00:07:19.148 distributor: explicitly disabled via build config 00:07:19.148 efd: explicitly disabled via build config 00:07:19.148 eventdev: explicitly disabled via build config 00:07:19.148 dispatcher: explicitly disabled via build config 00:07:19.148 gpudev: explicitly disabled via build config 00:07:19.148 gro: explicitly disabled via build config 00:07:19.148 gso: explicitly disabled via build config 00:07:19.148 ip_frag: explicitly disabled via build config 00:07:19.148 jobstats: explicitly disabled via build config 00:07:19.148 latencystats: explicitly disabled via build config 00:07:19.148 lpm: explicitly disabled via build config 00:07:19.148 member: explicitly disabled via build config 00:07:19.148 pcapng: explicitly disabled via build config 00:07:19.148 rawdev: explicitly disabled via build config 00:07:19.148 regexdev: explicitly disabled via build config 00:07:19.148 mldev: explicitly disabled via build config 00:07:19.148 rib: explicitly disabled via build config 00:07:19.148 sched: explicitly disabled via build config 00:07:19.148 stack: explicitly disabled via build config 00:07:19.148 ipsec: explicitly disabled via build config 00:07:19.148 pdcp: explicitly disabled via build config 00:07:19.148 fib: explicitly disabled via build config 00:07:19.148 port: explicitly disabled via build config 00:07:19.148 pdump: explicitly disabled via build config 00:07:19.148 table: explicitly disabled via build config 00:07:19.148 pipeline: explicitly disabled via build config 00:07:19.148 graph: explicitly disabled via build config 00:07:19.148 node: explicitly disabled via build config 00:07:19.148 00:07:19.148 drivers: 00:07:19.148 common/cpt: not in enabled drivers build config 00:07:19.148 common/dpaax: not in enabled drivers build config 00:07:19.148 common/iavf: not in enabled drivers build config 00:07:19.148 common/idpf: not in enabled drivers build config 00:07:19.148 common/ionic: not in enabled drivers build config 00:07:19.148 common/mvep: not in enabled drivers build config 00:07:19.148 common/octeontx: not in enabled drivers build config 00:07:19.148 bus/auxiliary: not in enabled drivers build config 00:07:19.148 bus/cdx: not in enabled drivers build config 00:07:19.148 bus/dpaa: not in enabled drivers build config 00:07:19.148 bus/fslmc: not in enabled drivers build config 00:07:19.148 bus/ifpga: not in enabled drivers build config 00:07:19.148 bus/platform: not in enabled drivers build config 00:07:19.148 bus/uacce: not in enabled drivers build config 00:07:19.148 bus/vmbus: not in enabled drivers build config 00:07:19.148 common/cnxk: not in enabled drivers build config 00:07:19.148 common/mlx5: not in enabled drivers build config 00:07:19.148 common/nfp: not in enabled drivers build config 00:07:19.148 common/nitrox: not in enabled drivers build config 00:07:19.148 common/qat: not in enabled drivers build config 
00:07:19.148 common/sfc_efx: not in enabled drivers build config 00:07:19.148 mempool/bucket: not in enabled drivers build config 00:07:19.148 mempool/cnxk: not in enabled drivers build config 00:07:19.148 mempool/dpaa: not in enabled drivers build config 00:07:19.148 mempool/dpaa2: not in enabled drivers build config 00:07:19.148 mempool/octeontx: not in enabled drivers build config 00:07:19.148 mempool/stack: not in enabled drivers build config 00:07:19.148 dma/cnxk: not in enabled drivers build config 00:07:19.148 dma/dpaa: not in enabled drivers build config 00:07:19.148 dma/dpaa2: not in enabled drivers build config 00:07:19.148 dma/hisilicon: not in enabled drivers build config 00:07:19.148 dma/idxd: not in enabled drivers build config 00:07:19.148 dma/ioat: not in enabled drivers build config 00:07:19.148 dma/skeleton: not in enabled drivers build config 00:07:19.148 net/af_packet: not in enabled drivers build config 00:07:19.148 net/af_xdp: not in enabled drivers build config 00:07:19.148 net/ark: not in enabled drivers build config 00:07:19.148 net/atlantic: not in enabled drivers build config 00:07:19.148 net/avp: not in enabled drivers build config 00:07:19.148 net/axgbe: not in enabled drivers build config 00:07:19.148 net/bnx2x: not in enabled drivers build config 00:07:19.148 net/bnxt: not in enabled drivers build config 00:07:19.148 net/bonding: not in enabled drivers build config 00:07:19.148 net/cnxk: not in enabled drivers build config 00:07:19.148 net/cpfl: not in enabled drivers build config 00:07:19.148 net/cxgbe: not in enabled drivers build config 00:07:19.148 net/dpaa: not in enabled drivers build config 00:07:19.148 net/dpaa2: not in enabled drivers build config 00:07:19.148 net/e1000: not in enabled drivers build config 00:07:19.148 net/ena: not in enabled drivers build config 00:07:19.148 net/enetc: not in enabled drivers build config 00:07:19.149 net/enetfec: not in enabled drivers build config 00:07:19.149 net/enic: not in enabled drivers build config 00:07:19.149 net/failsafe: not in enabled drivers build config 00:07:19.149 net/fm10k: not in enabled drivers build config 00:07:19.149 net/gve: not in enabled drivers build config 00:07:19.149 net/hinic: not in enabled drivers build config 00:07:19.149 net/hns3: not in enabled drivers build config 00:07:19.149 net/i40e: not in enabled drivers build config 00:07:19.149 net/iavf: not in enabled drivers build config 00:07:19.149 net/ice: not in enabled drivers build config 00:07:19.149 net/idpf: not in enabled drivers build config 00:07:19.149 net/igc: not in enabled drivers build config 00:07:19.149 net/ionic: not in enabled drivers build config 00:07:19.149 net/ipn3ke: not in enabled drivers build config 00:07:19.149 net/ixgbe: not in enabled drivers build config 00:07:19.149 net/mana: not in enabled drivers build config 00:07:19.149 net/memif: not in enabled drivers build config 00:07:19.149 net/mlx4: not in enabled drivers build config 00:07:19.149 net/mlx5: not in enabled drivers build config 00:07:19.149 net/mvneta: not in enabled drivers build config 00:07:19.149 net/mvpp2: not in enabled drivers build config 00:07:19.149 net/netvsc: not in enabled drivers build config 00:07:19.149 net/nfb: not in enabled drivers build config 00:07:19.149 net/nfp: not in enabled drivers build config 00:07:19.149 net/ngbe: not in enabled drivers build config 00:07:19.149 net/null: not in enabled drivers build config 00:07:19.149 net/octeontx: not in enabled drivers build config 00:07:19.149 net/octeon_ep: not in enabled 
drivers build config 00:07:19.149 net/pcap: not in enabled drivers build config 00:07:19.149 net/pfe: not in enabled drivers build config 00:07:19.149 net/qede: not in enabled drivers build config 00:07:19.149 net/ring: not in enabled drivers build config 00:07:19.149 net/sfc: not in enabled drivers build config 00:07:19.149 net/softnic: not in enabled drivers build config 00:07:19.149 net/tap: not in enabled drivers build config 00:07:19.149 net/thunderx: not in enabled drivers build config 00:07:19.149 net/txgbe: not in enabled drivers build config 00:07:19.149 net/vdev_netvsc: not in enabled drivers build config 00:07:19.149 net/vhost: not in enabled drivers build config 00:07:19.149 net/virtio: not in enabled drivers build config 00:07:19.149 net/vmxnet3: not in enabled drivers build config 00:07:19.149 raw/*: missing internal dependency, "rawdev" 00:07:19.149 crypto/armv8: not in enabled drivers build config 00:07:19.149 crypto/bcmfs: not in enabled drivers build config 00:07:19.149 crypto/caam_jr: not in enabled drivers build config 00:07:19.149 crypto/ccp: not in enabled drivers build config 00:07:19.149 crypto/cnxk: not in enabled drivers build config 00:07:19.149 crypto/dpaa_sec: not in enabled drivers build config 00:07:19.149 crypto/dpaa2_sec: not in enabled drivers build config 00:07:19.149 crypto/ipsec_mb: not in enabled drivers build config 00:07:19.149 crypto/mlx5: not in enabled drivers build config 00:07:19.149 crypto/mvsam: not in enabled drivers build config 00:07:19.149 crypto/nitrox: not in enabled drivers build config 00:07:19.149 crypto/null: not in enabled drivers build config 00:07:19.149 crypto/octeontx: not in enabled drivers build config 00:07:19.149 crypto/openssl: not in enabled drivers build config 00:07:19.149 crypto/scheduler: not in enabled drivers build config 00:07:19.149 crypto/uadk: not in enabled drivers build config 00:07:19.149 crypto/virtio: not in enabled drivers build config 00:07:19.149 compress/isal: not in enabled drivers build config 00:07:19.149 compress/mlx5: not in enabled drivers build config 00:07:19.149 compress/nitrox: not in enabled drivers build config 00:07:19.149 compress/octeontx: not in enabled drivers build config 00:07:19.149 compress/zlib: not in enabled drivers build config 00:07:19.149 regex/*: missing internal dependency, "regexdev" 00:07:19.149 ml/*: missing internal dependency, "mldev" 00:07:19.149 vdpa/ifc: not in enabled drivers build config 00:07:19.149 vdpa/mlx5: not in enabled drivers build config 00:07:19.149 vdpa/nfp: not in enabled drivers build config 00:07:19.149 vdpa/sfc: not in enabled drivers build config 00:07:19.149 event/*: missing internal dependency, "eventdev" 00:07:19.149 baseband/*: missing internal dependency, "bbdev" 00:07:19.149 gpu/*: missing internal dependency, "gpudev" 00:07:19.149 00:07:19.149 00:07:19.149 Build targets in project: 85 00:07:19.149 00:07:19.149 DPDK 24.03.0 00:07:19.149 00:07:19.149 User defined options 00:07:19.149 buildtype : debug 00:07:19.149 default_library : static 00:07:19.149 libdir : lib 00:07:19.149 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:19.149 c_args : -fPIC -Werror 00:07:19.149 c_link_args : 00:07:19.149 cpu_instruction_set: native 00:07:19.149 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:07:19.149 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:07:19.149 enable_docs : false 00:07:19.149 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:19.149 enable_kmods : false 00:07:19.149 max_lcores : 128 00:07:19.149 tests : false 00:07:19.149 00:07:19.149 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:19.149 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:07:19.149 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:19.149 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:19.413 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:19.413 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:19.413 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:19.413 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:19.413 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:19.413 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:19.413 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:19.413 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:19.413 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:19.413 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:19.413 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:19.413 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:19.413 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:19.413 [16/268] Linking static target lib/librte_log.a 00:07:19.413 [17/268] Linking static target lib/librte_kvargs.a 00:07:19.413 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:19.413 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:19.674 [20/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:19.674 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:19.674 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:19.674 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:19.674 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:19.934 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:19.934 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:19.934 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:19.934 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:19.934 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:19.934 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:19.934 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:19.934 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:19.934 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:19.934 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:19.934 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:19.934 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:19.934 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:19.934 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:19.934 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:19.934 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:19.934 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:19.934 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:19.934 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:19.934 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:19.934 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:19.934 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:19.934 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:19.934 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:19.934 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:19.934 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:19.934 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:19.934 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:19.934 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:19.934 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:19.934 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:19.934 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:19.934 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:19.934 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:19.934 [59/268] Linking static target lib/librte_telemetry.a 00:07:19.934 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:19.934 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:19.934 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:19.934 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:19.934 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:19.934 [65/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:19.934 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:19.934 [67/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:19.934 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:19.934 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:19.934 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:19.934 [71/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.934 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:19.934 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:19.934 
[74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:19.934 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:19.934 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:19.934 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:19.934 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:19.934 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:19.934 [80/268] Linking static target lib/librte_pci.a 00:07:19.934 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:19.934 [82/268] Linking static target lib/librte_ring.a 00:07:19.934 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:19.934 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:19.934 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:19.934 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:19.934 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:19.934 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:19.934 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:19.934 [90/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:19.934 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:19.934 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:19.934 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:19.934 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:19.934 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:19.934 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:19.934 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:19.934 [98/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:19.934 [99/268] Linking static target lib/librte_eal.a 00:07:19.934 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:19.934 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:19.934 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:19.934 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:19.934 [104/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:19.934 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:19.934 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:20.197 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:20.197 [108/268] Linking static target lib/librte_mempool.a 00:07:20.197 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:20.197 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:20.197 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:20.197 [112/268] Linking static target lib/librte_rcu.a 00:07:20.197 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:20.197 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:20.197 [115/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 
00:07:20.197 [116/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.197 [117/268] Linking static target lib/librte_net.a 00:07:20.456 [118/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.456 [119/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:20.456 [120/268] Linking static target lib/librte_meter.a 00:07:20.456 [121/268] Linking target lib/librte_log.so.24.1 00:07:20.456 [122/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:20.456 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.456 [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:20.456 [125/268] Linking static target lib/librte_mbuf.a 00:07:20.456 [126/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.456 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:20.456 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:20.456 [129/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:20.456 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:20.456 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:20.456 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:20.456 [133/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:20.456 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:20.456 [135/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:20.456 [136/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.456 [137/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:20.456 [138/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:20.456 [139/268] Linking static target lib/librte_cmdline.a 00:07:20.456 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:20.456 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:20.456 [142/268] Linking static target lib/librte_timer.a 00:07:20.456 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:20.456 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:20.456 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:20.456 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:20.456 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:20.456 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:20.716 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:20.716 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:20.716 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:20.716 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:20.716 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:20.716 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:20.716 [155/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:20.716 [156/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:20.716 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:20.716 [158/268] Linking static target lib/librte_compressdev.a 00:07:20.716 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:20.716 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:20.716 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.716 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:20.716 [163/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:20.716 [164/268] Linking static target lib/librte_dmadev.a 00:07:20.716 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:20.716 [166/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:20.716 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:20.716 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:20.716 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:20.716 [170/268] Linking target lib/librte_kvargs.so.24.1 00:07:20.716 [171/268] Linking target lib/librte_telemetry.so.24.1 00:07:20.716 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:20.716 [173/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.716 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:20.716 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:20.716 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:20.716 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:20.716 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:20.716 [179/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:20.716 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:20.716 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:20.716 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:20.716 [183/268] Linking static target lib/librte_security.a 00:07:20.716 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:20.716 [185/268] Linking static target lib/librte_reorder.a 00:07:20.716 [186/268] Linking static target lib/librte_power.a 00:07:20.716 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:20.716 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:20.716 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:20.716 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:20.716 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:20.716 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:20.716 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:20.716 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:20.976 [195/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.976 
[196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:20.976 [197/268] Linking static target lib/librte_hash.a 00:07:20.976 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:20.976 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:20.976 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:20.976 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:20.976 [202/268] Linking static target drivers/librte_bus_vdev.a 00:07:20.976 [203/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:20.976 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:20.976 [205/268] Linking static target lib/librte_cryptodev.a 00:07:20.976 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:20.976 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:20.976 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:20.976 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.976 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:20.976 [211/268] Linking static target drivers/librte_bus_pci.a 00:07:20.976 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:20.976 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:20.976 [214/268] Linking static target drivers/librte_mempool_ring.a 00:07:20.976 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:21.235 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.235 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.235 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:21.235 [219/268] Linking static target lib/librte_ethdev.a 00:07:21.235 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.235 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.235 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.494 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.753 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.753 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:22.012 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.012 [227/268] Linking static target lib/librte_vhost.a 00:07:22.012 [228/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.012 [229/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:23.392 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.331 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.462 [232/268] Generating lib/ethdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:07:32.722 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.722 [234/268] Linking target lib/librte_eal.so.24.1 00:07:32.982 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:32.982 [236/268] Linking target lib/librte_meter.so.24.1 00:07:32.982 [237/268] Linking target lib/librte_pci.so.24.1 00:07:32.982 [238/268] Linking target lib/librte_dmadev.so.24.1 00:07:32.982 [239/268] Linking target lib/librte_ring.so.24.1 00:07:32.982 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:32.982 [241/268] Linking target lib/librte_timer.so.24.1 00:07:33.241 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:33.241 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:33.241 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:33.241 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:33.241 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:33.241 [247/268] Linking target lib/librte_mempool.so.24.1 00:07:33.241 [248/268] Linking target lib/librte_rcu.so.24.1 00:07:33.241 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:33.241 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:33.241 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:33.501 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:33.501 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:33.501 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:33.501 [255/268] Linking target lib/librte_reorder.so.24.1 00:07:33.501 [256/268] Linking target lib/librte_net.so.24.1 00:07:33.501 [257/268] Linking target lib/librte_compressdev.so.24.1 00:07:33.501 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:33.760 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:33.760 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:33.760 [261/268] Linking target lib/librte_hash.so.24.1 00:07:33.760 [262/268] Linking target lib/librte_security.so.24.1 00:07:33.760 [263/268] Linking target lib/librte_cmdline.so.24.1 00:07:33.760 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:34.020 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:34.020 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:34.020 [267/268] Linking target lib/librte_power.so.24.1 00:07:34.020 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:34.020 INFO: autodetecting backend as ninja 00:07:34.020 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:07:35.401 CC lib/log/log.o 00:07:35.401 CC lib/log/log_flags.o 00:07:35.401 CC lib/ut/ut.o 00:07:35.401 CC lib/log/log_deprecated.o 00:07:35.401 CC lib/ut_mock/mock.o 00:07:35.401 LIB libspdk_ut.a 00:07:35.401 LIB libspdk_log.a 00:07:35.401 LIB libspdk_ut_mock.a 00:07:35.401 CC lib/ioat/ioat.o 00:07:35.660 CXX lib/trace_parser/trace.o 00:07:35.660 CC lib/dma/dma.o 00:07:35.661 CC lib/util/base64.o 00:07:35.661 CC lib/util/bit_array.o 
00:07:35.661 CC lib/util/cpuset.o 00:07:35.661 CC lib/util/crc16.o 00:07:35.661 CC lib/util/crc32.o 00:07:35.661 CC lib/util/crc32c.o 00:07:35.661 CC lib/util/crc32_ieee.o 00:07:35.661 CC lib/util/crc64.o 00:07:35.661 CC lib/util/dif.o 00:07:35.661 CC lib/util/fd.o 00:07:35.661 CC lib/util/fd_group.o 00:07:35.661 CC lib/util/file.o 00:07:35.661 CC lib/util/hexlify.o 00:07:35.661 CC lib/util/iov.o 00:07:35.661 CC lib/util/math.o 00:07:35.661 CC lib/util/net.o 00:07:35.661 CC lib/util/pipe.o 00:07:35.661 CC lib/util/strerror_tls.o 00:07:35.661 CC lib/util/string.o 00:07:35.661 CC lib/util/uuid.o 00:07:35.661 CC lib/util/xor.o 00:07:35.661 CC lib/util/zipf.o 00:07:35.661 CC lib/util/md5.o 00:07:35.661 CC lib/vfio_user/host/vfio_user.o 00:07:35.661 CC lib/vfio_user/host/vfio_user_pci.o 00:07:35.661 LIB libspdk_dma.a 00:07:35.661 LIB libspdk_ioat.a 00:07:35.920 LIB libspdk_vfio_user.a 00:07:35.920 LIB libspdk_util.a 00:07:36.179 LIB libspdk_trace_parser.a 00:07:36.179 CC lib/env_dpdk/env.o 00:07:36.179 CC lib/env_dpdk/pci.o 00:07:36.179 CC lib/env_dpdk/memory.o 00:07:36.179 CC lib/env_dpdk/init.o 00:07:36.179 CC lib/env_dpdk/threads.o 00:07:36.179 CC lib/env_dpdk/pci_ioat.o 00:07:36.179 CC lib/env_dpdk/pci_virtio.o 00:07:36.179 CC lib/rdma_utils/rdma_utils.o 00:07:36.179 CC lib/env_dpdk/pci_vmd.o 00:07:36.179 CC lib/env_dpdk/pci_idxd.o 00:07:36.179 CC lib/rdma_provider/common.o 00:07:36.179 CC lib/vmd/vmd.o 00:07:36.179 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:36.179 CC lib/env_dpdk/pci_event.o 00:07:36.179 CC lib/vmd/led.o 00:07:36.179 CC lib/env_dpdk/sigbus_handler.o 00:07:36.179 CC lib/json/json_parse.o 00:07:36.179 CC lib/env_dpdk/pci_dpdk.o 00:07:36.179 CC lib/json/json_util.o 00:07:36.179 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:36.179 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:36.179 CC lib/json/json_write.o 00:07:36.179 CC lib/idxd/idxd.o 00:07:36.179 CC lib/conf/conf.o 00:07:36.179 CC lib/idxd/idxd_user.o 00:07:36.179 CC lib/idxd/idxd_kernel.o 00:07:36.439 LIB libspdk_rdma_provider.a 00:07:36.439 LIB libspdk_conf.a 00:07:36.439 LIB libspdk_rdma_utils.a 00:07:36.439 LIB libspdk_json.a 00:07:36.439 LIB libspdk_idxd.a 00:07:36.699 LIB libspdk_vmd.a 00:07:36.699 CC lib/jsonrpc/jsonrpc_server.o 00:07:36.699 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:36.699 CC lib/jsonrpc/jsonrpc_client.o 00:07:36.699 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:36.959 LIB libspdk_jsonrpc.a 00:07:37.219 LIB libspdk_env_dpdk.a 00:07:37.219 CC lib/rpc/rpc.o 00:07:37.219 LIB libspdk_rpc.a 00:07:37.789 CC lib/notify/notify.o 00:07:37.789 CC lib/notify/notify_rpc.o 00:07:37.789 CC lib/trace/trace.o 00:07:37.789 CC lib/trace/trace_flags.o 00:07:37.789 CC lib/keyring/keyring.o 00:07:37.789 CC lib/trace/trace_rpc.o 00:07:37.789 CC lib/keyring/keyring_rpc.o 00:07:37.789 LIB libspdk_notify.a 00:07:37.789 LIB libspdk_keyring.a 00:07:37.789 LIB libspdk_trace.a 00:07:38.049 CC lib/sock/sock.o 00:07:38.049 CC lib/thread/thread.o 00:07:38.049 CC lib/sock/sock_rpc.o 00:07:38.049 CC lib/thread/iobuf.o 00:07:38.309 LIB libspdk_sock.a 00:07:38.569 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:38.569 CC lib/nvme/nvme_ctrlr.o 00:07:38.829 CC lib/nvme/nvme_fabric.o 00:07:38.829 CC lib/nvme/nvme_ns_cmd.o 00:07:38.829 CC lib/nvme/nvme_ns.o 00:07:38.829 CC lib/nvme/nvme_pcie_common.o 00:07:38.829 CC lib/nvme/nvme_pcie.o 00:07:38.829 CC lib/nvme/nvme_qpair.o 00:07:38.829 CC lib/nvme/nvme.o 00:07:38.829 CC lib/nvme/nvme_quirks.o 00:07:38.829 CC lib/nvme/nvme_transport.o 00:07:38.829 CC lib/nvme/nvme_discovery.o 00:07:38.829 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:38.829 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:38.829 CC lib/nvme/nvme_tcp.o 00:07:38.829 CC lib/nvme/nvme_opal.o 00:07:38.829 CC lib/nvme/nvme_io_msg.o 00:07:38.829 CC lib/nvme/nvme_poll_group.o 00:07:38.829 CC lib/nvme/nvme_zns.o 00:07:38.829 CC lib/nvme/nvme_stubs.o 00:07:38.829 CC lib/nvme/nvme_auth.o 00:07:38.829 CC lib/nvme/nvme_cuse.o 00:07:38.829 CC lib/nvme/nvme_vfio_user.o 00:07:38.829 CC lib/nvme/nvme_rdma.o 00:07:38.829 LIB libspdk_thread.a 00:07:39.088 CC lib/blob/request.o 00:07:39.088 CC lib/init/json_config.o 00:07:39.088 CC lib/blob/blobstore.o 00:07:39.088 CC lib/blob/blob_bs_dev.o 00:07:39.088 CC lib/blob/zeroes.o 00:07:39.088 CC lib/fsdev/fsdev.o 00:07:39.088 CC lib/init/subsystem.o 00:07:39.088 CC lib/init/subsystem_rpc.o 00:07:39.088 CC lib/init/rpc.o 00:07:39.088 CC lib/fsdev/fsdev_io.o 00:07:39.088 CC lib/fsdev/fsdev_rpc.o 00:07:39.088 CC lib/vfu_tgt/tgt_endpoint.o 00:07:39.088 CC lib/vfu_tgt/tgt_rpc.o 00:07:39.088 CC lib/virtio/virtio.o 00:07:39.088 CC lib/virtio/virtio_vhost_user.o 00:07:39.088 CC lib/virtio/virtio_pci.o 00:07:39.088 CC lib/virtio/virtio_vfio_user.o 00:07:39.348 CC lib/accel/accel.o 00:07:39.348 CC lib/accel/accel_rpc.o 00:07:39.348 CC lib/accel/accel_sw.o 00:07:39.348 LIB libspdk_init.a 00:07:39.348 LIB libspdk_virtio.a 00:07:39.348 LIB libspdk_vfu_tgt.a 00:07:39.607 LIB libspdk_fsdev.a 00:07:39.607 CC lib/event/app.o 00:07:39.607 CC lib/event/reactor.o 00:07:39.607 CC lib/event/log_rpc.o 00:07:39.607 CC lib/event/app_rpc.o 00:07:39.607 CC lib/event/scheduler_static.o 00:07:39.866 LIB libspdk_event.a 00:07:39.866 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:39.866 LIB libspdk_accel.a 00:07:40.125 LIB libspdk_nvme.a 00:07:40.384 LIB libspdk_fuse_dispatcher.a 00:07:40.384 CC lib/bdev/bdev.o 00:07:40.384 CC lib/bdev/bdev_zone.o 00:07:40.384 CC lib/bdev/bdev_rpc.o 00:07:40.384 CC lib/bdev/part.o 00:07:40.384 CC lib/bdev/scsi_nvme.o 00:07:40.954 LIB libspdk_blob.a 00:07:41.214 CC lib/blobfs/blobfs.o 00:07:41.214 CC lib/lvol/lvol.o 00:07:41.214 CC lib/blobfs/tree.o 00:07:41.784 LIB libspdk_lvol.a 00:07:41.784 LIB libspdk_blobfs.a 00:07:42.043 LIB libspdk_bdev.a 00:07:42.306 CC lib/ublk/ublk.o 00:07:42.306 CC lib/ublk/ublk_rpc.o 00:07:42.306 CC lib/nbd/nbd.o 00:07:42.306 CC lib/nbd/nbd_rpc.o 00:07:42.306 CC lib/ftl/ftl_core.o 00:07:42.306 CC lib/scsi/dev.o 00:07:42.306 CC lib/ftl/ftl_init.o 00:07:42.306 CC lib/scsi/lun.o 00:07:42.306 CC lib/nvmf/ctrlr.o 00:07:42.306 CC lib/ftl/ftl_layout.o 00:07:42.306 CC lib/scsi/port.o 00:07:42.306 CC lib/ftl/ftl_debug.o 00:07:42.306 CC lib/nvmf/ctrlr_discovery.o 00:07:42.306 CC lib/nvmf/ctrlr_bdev.o 00:07:42.306 CC lib/scsi/scsi.o 00:07:42.306 CC lib/ftl/ftl_io.o 00:07:42.306 CC lib/scsi/scsi_bdev.o 00:07:42.306 CC lib/nvmf/subsystem.o 00:07:42.306 CC lib/ftl/ftl_sb.o 00:07:42.306 CC lib/nvmf/nvmf.o 00:07:42.306 CC lib/ftl/ftl_l2p.o 00:07:42.306 CC lib/ftl/ftl_l2p_flat.o 00:07:42.306 CC lib/ftl/ftl_nv_cache.o 00:07:42.306 CC lib/nvmf/nvmf_rpc.o 00:07:42.306 CC lib/scsi/scsi_pr.o 00:07:42.306 CC lib/scsi/scsi_rpc.o 00:07:42.306 CC lib/nvmf/transport.o 00:07:42.306 CC lib/scsi/task.o 00:07:42.306 CC lib/ftl/ftl_band.o 00:07:42.306 CC lib/nvmf/stubs.o 00:07:42.306 CC lib/ftl/ftl_band_ops.o 00:07:42.306 CC lib/ftl/ftl_writer.o 00:07:42.306 CC lib/nvmf/tcp.o 00:07:42.306 CC lib/nvmf/mdns_server.o 00:07:42.306 CC lib/ftl/ftl_rq.o 00:07:42.306 CC lib/ftl/ftl_l2p_cache.o 00:07:42.306 CC lib/nvmf/rdma.o 00:07:42.306 CC lib/nvmf/vfio_user.o 00:07:42.306 CC lib/ftl/ftl_reloc.o 00:07:42.306 
CC lib/ftl/ftl_p2l.o 00:07:42.306 CC lib/nvmf/auth.o 00:07:42.306 CC lib/ftl/ftl_p2l_log.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:42.306 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:42.306 CC lib/ftl/utils/ftl_conf.o 00:07:42.306 CC lib/ftl/utils/ftl_md.o 00:07:42.306 CC lib/ftl/utils/ftl_mempool.o 00:07:42.306 CC lib/ftl/utils/ftl_bitmap.o 00:07:42.306 CC lib/ftl/utils/ftl_property.o 00:07:42.306 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:42.306 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:42.306 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:42.306 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:42.306 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:42.306 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:42.306 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:42.306 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:42.306 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:42.306 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:42.306 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:42.306 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:42.566 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:42.566 CC lib/ftl/base/ftl_base_dev.o 00:07:42.566 CC lib/ftl/base/ftl_base_bdev.o 00:07:42.566 CC lib/ftl/ftl_trace.o 00:07:42.825 LIB libspdk_nbd.a 00:07:42.825 LIB libspdk_scsi.a 00:07:43.085 LIB libspdk_ublk.a 00:07:43.085 CC lib/iscsi/conn.o 00:07:43.085 CC lib/vhost/vhost.o 00:07:43.085 CC lib/iscsi/init_grp.o 00:07:43.085 CC lib/iscsi/iscsi.o 00:07:43.085 CC lib/vhost/vhost_rpc.o 00:07:43.085 CC lib/vhost/vhost_scsi.o 00:07:43.085 CC lib/iscsi/param.o 00:07:43.085 CC lib/iscsi/portal_grp.o 00:07:43.085 CC lib/vhost/vhost_blk.o 00:07:43.085 CC lib/iscsi/tgt_node.o 00:07:43.085 CC lib/iscsi/iscsi_subsystem.o 00:07:43.085 CC lib/vhost/rte_vhost_user.o 00:07:43.085 CC lib/iscsi/iscsi_rpc.o 00:07:43.085 CC lib/iscsi/task.o 00:07:43.343 LIB libspdk_ftl.a 00:07:43.910 LIB libspdk_nvmf.a 00:07:43.910 LIB libspdk_vhost.a 00:07:43.910 LIB libspdk_iscsi.a 00:07:44.530 CC module/vfu_device/vfu_virtio.o 00:07:44.530 CC module/env_dpdk/env_dpdk_rpc.o 00:07:44.530 CC module/vfu_device/vfu_virtio_blk.o 00:07:44.530 CC module/vfu_device/vfu_virtio_scsi.o 00:07:44.530 CC module/vfu_device/vfu_virtio_fs.o 00:07:44.530 CC module/vfu_device/vfu_virtio_rpc.o 00:07:44.530 CC module/fsdev/aio/fsdev_aio.o 00:07:44.530 CC module/fsdev/aio/linux_aio_mgr.o 00:07:44.530 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:44.530 CC module/accel/dsa/accel_dsa.o 00:07:44.530 LIB libspdk_env_dpdk_rpc.a 00:07:44.530 CC module/keyring/file/keyring.o 00:07:44.530 CC module/accel/dsa/accel_dsa_rpc.o 00:07:44.530 CC module/accel/error/accel_error.o 00:07:44.530 CC module/keyring/file/keyring_rpc.o 00:07:44.530 CC module/accel/error/accel_error_rpc.o 00:07:44.530 CC module/accel/iaa/accel_iaa.o 00:07:44.530 CC module/sock/posix/posix.o 00:07:44.530 CC module/accel/ioat/accel_ioat_rpc.o 00:07:44.530 CC module/blob/bdev/blob_bdev.o 00:07:44.530 CC module/keyring/linux/keyring.o 00:07:44.530 CC module/accel/iaa/accel_iaa_rpc.o 00:07:44.530 CC module/keyring/linux/keyring_rpc.o 00:07:44.530 CC 
module/accel/ioat/accel_ioat.o 00:07:44.530 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:44.530 CC module/scheduler/gscheduler/gscheduler.o 00:07:44.530 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:44.790 LIB libspdk_keyring_linux.a 00:07:44.790 LIB libspdk_keyring_file.a 00:07:44.790 LIB libspdk_scheduler_gscheduler.a 00:07:44.790 LIB libspdk_accel_error.a 00:07:44.790 LIB libspdk_scheduler_dpdk_governor.a 00:07:44.790 LIB libspdk_accel_ioat.a 00:07:44.790 LIB libspdk_scheduler_dynamic.a 00:07:44.790 LIB libspdk_accel_iaa.a 00:07:44.790 LIB libspdk_blob_bdev.a 00:07:44.790 LIB libspdk_accel_dsa.a 00:07:44.790 LIB libspdk_vfu_device.a 00:07:45.050 LIB libspdk_fsdev_aio.a 00:07:45.050 LIB libspdk_sock_posix.a 00:07:45.050 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:45.050 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:45.050 CC module/bdev/nvme/bdev_nvme.o 00:07:45.050 CC module/bdev/aio/bdev_aio.o 00:07:45.050 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:45.050 CC module/bdev/aio/bdev_aio_rpc.o 00:07:45.050 CC module/bdev/nvme/nvme_rpc.o 00:07:45.050 CC module/bdev/nvme/vbdev_opal.o 00:07:45.050 CC module/bdev/error/vbdev_error.o 00:07:45.050 CC module/bdev/nvme/bdev_mdns_client.o 00:07:45.050 CC module/bdev/error/vbdev_error_rpc.o 00:07:45.050 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:45.050 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:45.050 CC module/bdev/malloc/bdev_malloc.o 00:07:45.050 CC module/blobfs/bdev/blobfs_bdev.o 00:07:45.050 CC module/bdev/null/bdev_null_rpc.o 00:07:45.050 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:45.050 CC module/bdev/null/bdev_null.o 00:07:45.050 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:45.050 CC module/bdev/delay/vbdev_delay.o 00:07:45.050 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:45.050 CC module/bdev/split/vbdev_split.o 00:07:45.050 CC module/bdev/lvol/vbdev_lvol.o 00:07:45.050 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:45.050 CC module/bdev/split/vbdev_split_rpc.o 00:07:45.050 CC module/bdev/gpt/gpt.o 00:07:45.050 CC module/bdev/gpt/vbdev_gpt.o 00:07:45.050 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:45.050 CC module/bdev/iscsi/bdev_iscsi.o 00:07:45.050 CC module/bdev/raid/bdev_raid.o 00:07:45.050 CC module/bdev/raid/bdev_raid_rpc.o 00:07:45.050 CC module/bdev/raid/raid1.o 00:07:45.050 CC module/bdev/raid/bdev_raid_sb.o 00:07:45.310 CC module/bdev/raid/raid0.o 00:07:45.310 CC module/bdev/raid/concat.o 00:07:45.310 CC module/bdev/passthru/vbdev_passthru.o 00:07:45.310 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:45.310 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:45.310 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:45.310 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:45.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:45.310 CC module/bdev/ftl/bdev_ftl.o 00:07:45.310 LIB libspdk_blobfs_bdev.a 00:07:45.310 LIB libspdk_bdev_error.a 00:07:45.310 LIB libspdk_bdev_split.a 00:07:45.310 LIB libspdk_bdev_null.a 00:07:45.310 LIB libspdk_bdev_aio.a 00:07:45.310 LIB libspdk_bdev_zone_block.a 00:07:45.570 LIB libspdk_bdev_delay.a 00:07:45.570 LIB libspdk_bdev_malloc.a 00:07:45.570 LIB libspdk_bdev_gpt.a 00:07:45.570 LIB libspdk_bdev_ftl.a 00:07:45.570 LIB libspdk_bdev_passthru.a 00:07:45.570 LIB libspdk_bdev_iscsi.a 00:07:45.570 LIB libspdk_bdev_virtio.a 00:07:45.570 LIB libspdk_bdev_lvol.a 00:07:45.830 LIB libspdk_bdev_raid.a 00:07:46.769 LIB libspdk_bdev_nvme.a 00:07:47.339 CC module/event/subsystems/sock/sock.o 00:07:47.339 CC module/event/subsystems/iobuf/iobuf.o 00:07:47.339 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:07:47.339 CC module/event/subsystems/scheduler/scheduler.o 00:07:47.339 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:47.339 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:47.339 CC module/event/subsystems/vmd/vmd.o 00:07:47.339 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:47.339 CC module/event/subsystems/keyring/keyring.o 00:07:47.339 CC module/event/subsystems/fsdev/fsdev.o 00:07:47.339 LIB libspdk_event_sock.a 00:07:47.339 LIB libspdk_event_vmd.a 00:07:47.339 LIB libspdk_event_iobuf.a 00:07:47.339 LIB libspdk_event_scheduler.a 00:07:47.339 LIB libspdk_event_vfu_tgt.a 00:07:47.339 LIB libspdk_event_vhost_blk.a 00:07:47.339 LIB libspdk_event_keyring.a 00:07:47.339 LIB libspdk_event_fsdev.a 00:07:47.599 CC module/event/subsystems/accel/accel.o 00:07:47.599 LIB libspdk_event_accel.a 00:07:48.169 CC module/event/subsystems/bdev/bdev.o 00:07:48.169 LIB libspdk_event_bdev.a 00:07:48.429 CC module/event/subsystems/ublk/ublk.o 00:07:48.429 CC module/event/subsystems/nbd/nbd.o 00:07:48.429 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:48.429 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:48.429 CC module/event/subsystems/scsi/scsi.o 00:07:48.690 LIB libspdk_event_nbd.a 00:07:48.690 LIB libspdk_event_ublk.a 00:07:48.690 LIB libspdk_event_scsi.a 00:07:48.690 LIB libspdk_event_nvmf.a 00:07:48.950 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:48.950 CC module/event/subsystems/iscsi/iscsi.o 00:07:48.950 LIB libspdk_event_vhost_scsi.a 00:07:48.950 LIB libspdk_event_iscsi.a 00:07:49.527 CC app/spdk_lspci/spdk_lspci.o 00:07:49.527 CC app/trace_record/trace_record.o 00:07:49.527 CC app/spdk_nvme_identify/identify.o 00:07:49.527 CC app/spdk_nvme_perf/perf.o 00:07:49.527 CC app/spdk_nvme_discover/discovery_aer.o 00:07:49.527 CXX app/trace/trace.o 00:07:49.527 CC test/rpc_client/rpc_client_test.o 00:07:49.527 CC app/spdk_top/spdk_top.o 00:07:49.527 TEST_HEADER include/spdk/assert.h 00:07:49.527 TEST_HEADER include/spdk/accel.h 00:07:49.527 TEST_HEADER include/spdk/accel_module.h 00:07:49.527 TEST_HEADER include/spdk/barrier.h 00:07:49.527 TEST_HEADER include/spdk/base64.h 00:07:49.527 TEST_HEADER include/spdk/bdev.h 00:07:49.527 TEST_HEADER include/spdk/bdev_module.h 00:07:49.527 TEST_HEADER include/spdk/bdev_zone.h 00:07:49.527 TEST_HEADER include/spdk/bit_pool.h 00:07:49.527 TEST_HEADER include/spdk/bit_array.h 00:07:49.527 TEST_HEADER include/spdk/blob_bdev.h 00:07:49.527 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:49.527 TEST_HEADER include/spdk/blobfs.h 00:07:49.527 TEST_HEADER include/spdk/blob.h 00:07:49.527 TEST_HEADER include/spdk/conf.h 00:07:49.527 TEST_HEADER include/spdk/config.h 00:07:49.527 TEST_HEADER include/spdk/cpuset.h 00:07:49.527 TEST_HEADER include/spdk/crc16.h 00:07:49.527 TEST_HEADER include/spdk/crc32.h 00:07:49.527 TEST_HEADER include/spdk/crc64.h 00:07:49.527 TEST_HEADER include/spdk/dif.h 00:07:49.527 TEST_HEADER include/spdk/dma.h 00:07:49.527 TEST_HEADER include/spdk/endian.h 00:07:49.527 TEST_HEADER include/spdk/env.h 00:07:49.527 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:49.527 TEST_HEADER include/spdk/event.h 00:07:49.527 TEST_HEADER include/spdk/env_dpdk.h 00:07:49.527 TEST_HEADER include/spdk/fd_group.h 00:07:49.527 TEST_HEADER include/spdk/fd.h 00:07:49.527 TEST_HEADER include/spdk/file.h 00:07:49.527 TEST_HEADER include/spdk/fsdev.h 00:07:49.527 TEST_HEADER include/spdk/fsdev_module.h 00:07:49.527 TEST_HEADER include/spdk/ftl.h 00:07:49.527 TEST_HEADER include/spdk/fuse_dispatcher.h 
00:07:49.527 TEST_HEADER include/spdk/gpt_spec.h 00:07:49.527 TEST_HEADER include/spdk/hexlify.h 00:07:49.527 TEST_HEADER include/spdk/histogram_data.h 00:07:49.527 TEST_HEADER include/spdk/idxd.h 00:07:49.527 TEST_HEADER include/spdk/idxd_spec.h 00:07:49.527 TEST_HEADER include/spdk/init.h 00:07:49.527 TEST_HEADER include/spdk/ioat.h 00:07:49.527 TEST_HEADER include/spdk/ioat_spec.h 00:07:49.527 TEST_HEADER include/spdk/iscsi_spec.h 00:07:49.527 TEST_HEADER include/spdk/json.h 00:07:49.527 TEST_HEADER include/spdk/jsonrpc.h 00:07:49.527 TEST_HEADER include/spdk/keyring.h 00:07:49.527 TEST_HEADER include/spdk/keyring_module.h 00:07:49.527 TEST_HEADER include/spdk/likely.h 00:07:49.527 TEST_HEADER include/spdk/log.h 00:07:49.527 TEST_HEADER include/spdk/md5.h 00:07:49.527 TEST_HEADER include/spdk/memory.h 00:07:49.527 TEST_HEADER include/spdk/mmio.h 00:07:49.527 TEST_HEADER include/spdk/lvol.h 00:07:49.527 TEST_HEADER include/spdk/nbd.h 00:07:49.527 TEST_HEADER include/spdk/net.h 00:07:49.527 TEST_HEADER include/spdk/notify.h 00:07:49.527 TEST_HEADER include/spdk/nvme.h 00:07:49.527 TEST_HEADER include/spdk/nvme_intel.h 00:07:49.527 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:49.527 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:49.527 TEST_HEADER include/spdk/nvme_spec.h 00:07:49.527 TEST_HEADER include/spdk/nvme_zns.h 00:07:49.527 CC app/nvmf_tgt/nvmf_main.o 00:07:49.527 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:49.527 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:49.527 CC app/iscsi_tgt/iscsi_tgt.o 00:07:49.527 TEST_HEADER include/spdk/nvmf.h 00:07:49.527 TEST_HEADER include/spdk/nvmf_spec.h 00:07:49.527 TEST_HEADER include/spdk/nvmf_transport.h 00:07:49.527 TEST_HEADER include/spdk/opal.h 00:07:49.527 TEST_HEADER include/spdk/opal_spec.h 00:07:49.527 TEST_HEADER include/spdk/pci_ids.h 00:07:49.527 TEST_HEADER include/spdk/pipe.h 00:07:49.527 TEST_HEADER include/spdk/queue.h 00:07:49.527 TEST_HEADER include/spdk/reduce.h 00:07:49.527 TEST_HEADER include/spdk/rpc.h 00:07:49.527 TEST_HEADER include/spdk/scheduler.h 00:07:49.527 TEST_HEADER include/spdk/scsi.h 00:07:49.527 TEST_HEADER include/spdk/sock.h 00:07:49.527 TEST_HEADER include/spdk/scsi_spec.h 00:07:49.527 TEST_HEADER include/spdk/stdinc.h 00:07:49.527 TEST_HEADER include/spdk/string.h 00:07:49.527 TEST_HEADER include/spdk/thread.h 00:07:49.527 TEST_HEADER include/spdk/trace.h 00:07:49.527 TEST_HEADER include/spdk/trace_parser.h 00:07:49.527 TEST_HEADER include/spdk/tree.h 00:07:49.527 TEST_HEADER include/spdk/ublk.h 00:07:49.527 TEST_HEADER include/spdk/util.h 00:07:49.527 TEST_HEADER include/spdk/uuid.h 00:07:49.527 TEST_HEADER include/spdk/version.h 00:07:49.527 CC examples/ioat/perf/perf.o 00:07:49.527 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:49.527 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:49.527 TEST_HEADER include/spdk/vhost.h 00:07:49.527 TEST_HEADER include/spdk/vmd.h 00:07:49.527 TEST_HEADER include/spdk/xor.h 00:07:49.527 TEST_HEADER include/spdk/zipf.h 00:07:49.527 CXX test/cpp_headers/accel.o 00:07:49.527 CXX test/cpp_headers/accel_module.o 00:07:49.527 CXX test/cpp_headers/assert.o 00:07:49.527 CC examples/ioat/verify/verify.o 00:07:49.527 CXX test/cpp_headers/base64.o 00:07:49.527 CXX test/cpp_headers/barrier.o 00:07:49.527 CXX test/cpp_headers/bdev.o 00:07:49.528 CXX test/cpp_headers/bdev_module.o 00:07:49.528 CXX test/cpp_headers/bdev_zone.o 00:07:49.528 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:49.528 CXX test/cpp_headers/bit_array.o 00:07:49.528 CXX test/cpp_headers/blob_bdev.o 
00:07:49.528 CXX test/cpp_headers/bit_pool.o 00:07:49.528 CXX test/cpp_headers/blobfs.o 00:07:49.528 CXX test/cpp_headers/blobfs_bdev.o 00:07:49.528 CXX test/cpp_headers/blob.o 00:07:49.528 CXX test/cpp_headers/config.o 00:07:49.528 CXX test/cpp_headers/conf.o 00:07:49.528 CXX test/cpp_headers/cpuset.o 00:07:49.528 CC app/spdk_tgt/spdk_tgt.o 00:07:49.528 CC test/thread/poller_perf/poller_perf.o 00:07:49.528 CXX test/cpp_headers/crc16.o 00:07:49.528 CXX test/cpp_headers/crc32.o 00:07:49.528 CC test/app/jsoncat/jsoncat.o 00:07:49.528 CXX test/cpp_headers/crc64.o 00:07:49.528 CC test/env/pci/pci_ut.o 00:07:49.528 CXX test/cpp_headers/dif.o 00:07:49.528 CXX test/cpp_headers/dma.o 00:07:49.528 CC test/app/histogram_perf/histogram_perf.o 00:07:49.528 CXX test/cpp_headers/endian.o 00:07:49.528 CXX test/cpp_headers/env_dpdk.o 00:07:49.528 CXX test/cpp_headers/event.o 00:07:49.528 CXX test/cpp_headers/env.o 00:07:49.528 CC test/env/memory/memory_ut.o 00:07:49.528 CC test/env/vtophys/vtophys.o 00:07:49.528 CXX test/cpp_headers/fd_group.o 00:07:49.528 CXX test/cpp_headers/fd.o 00:07:49.528 CC test/thread/lock/spdk_lock.o 00:07:49.528 CXX test/cpp_headers/file.o 00:07:49.528 CXX test/cpp_headers/fsdev.o 00:07:49.528 CXX test/cpp_headers/fsdev_module.o 00:07:49.528 CXX test/cpp_headers/ftl.o 00:07:49.528 CXX test/cpp_headers/fuse_dispatcher.o 00:07:49.528 CXX test/cpp_headers/gpt_spec.o 00:07:49.528 CXX test/cpp_headers/hexlify.o 00:07:49.528 CC examples/util/zipf/zipf.o 00:07:49.528 CXX test/cpp_headers/histogram_data.o 00:07:49.528 CXX test/cpp_headers/idxd.o 00:07:49.528 CC test/app/stub/stub.o 00:07:49.528 CC app/fio/nvme/fio_plugin.o 00:07:49.528 CC app/spdk_dd/spdk_dd.o 00:07:49.528 CXX test/cpp_headers/idxd_spec.o 00:07:49.528 LINK spdk_lspci 00:07:49.528 CC test/dma/test_dma/test_dma.o 00:07:49.528 CC test/app/bdev_svc/bdev_svc.o 00:07:49.528 CC app/fio/bdev/fio_plugin.o 00:07:49.528 CC test/env/mem_callbacks/mem_callbacks.o 00:07:49.528 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:49.528 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:49.528 LINK rpc_client_test 00:07:49.528 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:49.528 LINK spdk_nvme_discover 00:07:49.528 LINK spdk_trace_record 00:07:49.528 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:07:49.528 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:07:49.528 LINK interrupt_tgt 00:07:49.528 LINK jsoncat 00:07:49.528 LINK histogram_perf 00:07:49.528 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:49.528 LINK poller_perf 00:07:49.528 LINK vtophys 00:07:49.528 LINK env_dpdk_post_init 00:07:49.528 LINK zipf 00:07:49.528 CXX test/cpp_headers/init.o 00:07:49.528 CXX test/cpp_headers/ioat.o 00:07:49.528 CXX test/cpp_headers/ioat_spec.o 00:07:49.528 CXX test/cpp_headers/iscsi_spec.o 00:07:49.795 CXX test/cpp_headers/json.o 00:07:49.795 CXX test/cpp_headers/jsonrpc.o 00:07:49.795 CXX test/cpp_headers/keyring.o 00:07:49.795 CXX test/cpp_headers/keyring_module.o 00:07:49.795 CXX test/cpp_headers/likely.o 00:07:49.795 CXX test/cpp_headers/log.o 00:07:49.795 CXX test/cpp_headers/lvol.o 00:07:49.795 CXX test/cpp_headers/md5.o 00:07:49.795 CXX test/cpp_headers/memory.o 00:07:49.795 CXX test/cpp_headers/mmio.o 00:07:49.795 CXX test/cpp_headers/nbd.o 00:07:49.795 CXX test/cpp_headers/net.o 00:07:49.795 CXX test/cpp_headers/notify.o 00:07:49.795 CXX test/cpp_headers/nvme.o 00:07:49.795 CXX test/cpp_headers/nvme_intel.o 00:07:49.795 CXX test/cpp_headers/nvme_ocssd.o 00:07:49.795 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:49.795 CXX 
test/cpp_headers/nvme_spec.o 00:07:49.795 CXX test/cpp_headers/nvme_zns.o 00:07:49.795 CXX test/cpp_headers/nvmf_cmd.o 00:07:49.795 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:49.795 CXX test/cpp_headers/nvmf.o 00:07:49.795 CXX test/cpp_headers/nvmf_spec.o 00:07:49.795 CXX test/cpp_headers/nvmf_transport.o 00:07:49.795 CXX test/cpp_headers/opal.o 00:07:49.795 CXX test/cpp_headers/opal_spec.o 00:07:49.795 CXX test/cpp_headers/pci_ids.o 00:07:49.795 LINK nvmf_tgt 00:07:49.795 CXX test/cpp_headers/pipe.o 00:07:49.795 CXX test/cpp_headers/queue.o 00:07:49.795 CXX test/cpp_headers/reduce.o 00:07:49.795 LINK ioat_perf 00:07:49.795 CXX test/cpp_headers/rpc.o 00:07:49.795 LINK stub 00:07:49.795 LINK iscsi_tgt 00:07:49.795 CXX test/cpp_headers/scheduler.o 00:07:49.795 LINK verify 00:07:49.795 CXX test/cpp_headers/scsi.o 00:07:49.795 CXX test/cpp_headers/scsi_spec.o 00:07:49.795 CXX test/cpp_headers/sock.o 00:07:49.795 LINK spdk_tgt 00:07:49.795 LINK bdev_svc 00:07:49.795 LINK spdk_trace 00:07:49.795 CXX test/cpp_headers/stdinc.o 00:07:49.795 CXX test/cpp_headers/string.o 00:07:49.795 CXX test/cpp_headers/thread.o 00:07:49.795 CXX test/cpp_headers/trace.o 00:07:49.795 CXX test/cpp_headers/trace_parser.o 00:07:49.795 CXX test/cpp_headers/tree.o 00:07:49.795 CXX test/cpp_headers/ublk.o 00:07:49.795 CXX test/cpp_headers/util.o 00:07:49.795 CXX test/cpp_headers/uuid.o 00:07:49.795 CXX test/cpp_headers/version.o 00:07:49.795 CXX test/cpp_headers/vfio_user_pci.o 00:07:49.795 CXX test/cpp_headers/vfio_user_spec.o 00:07:49.795 CXX test/cpp_headers/vhost.o 00:07:49.795 CXX test/cpp_headers/vmd.o 00:07:49.795 CXX test/cpp_headers/xor.o 00:07:49.795 CXX test/cpp_headers/zipf.o 00:07:50.054 LINK pci_ut 00:07:50.054 LINK nvme_fuzz 00:07:50.054 LINK llvm_vfio_fuzz 00:07:50.054 LINK spdk_dd 00:07:50.054 LINK test_dma 00:07:50.054 LINK spdk_bdev 00:07:50.054 LINK spdk_nvme_identify 00:07:50.054 LINK vhost_fuzz 00:07:50.313 LINK mem_callbacks 00:07:50.313 LINK spdk_nvme_perf 00:07:50.313 LINK spdk_nvme 00:07:50.313 LINK spdk_top 00:07:50.313 CC examples/vmd/led/led.o 00:07:50.313 CC examples/sock/hello_world/hello_sock.o 00:07:50.313 CC examples/vmd/lsvmd/lsvmd.o 00:07:50.313 CC examples/idxd/perf/perf.o 00:07:50.313 LINK llvm_nvme_fuzz 00:07:50.313 CC examples/thread/thread/thread_ex.o 00:07:50.313 LINK led 00:07:50.313 LINK lsvmd 00:07:50.573 LINK hello_sock 00:07:50.573 CC app/vhost/vhost.o 00:07:50.573 LINK idxd_perf 00:07:50.573 LINK thread 00:07:50.573 LINK memory_ut 00:07:50.573 LINK vhost 00:07:50.833 LINK spdk_lock 00:07:50.833 LINK iscsi_fuzz 00:07:51.092 CC examples/nvme/hello_world/hello_world.o 00:07:51.092 CC examples/nvme/hotplug/hotplug.o 00:07:51.092 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:51.092 CC examples/nvme/abort/abort.o 00:07:51.092 CC examples/nvme/reconnect/reconnect.o 00:07:51.092 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:51.092 CC examples/nvme/arbitration/arbitration.o 00:07:51.092 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:51.350 LINK pmr_persistence 00:07:51.350 LINK hello_world 00:07:51.350 LINK cmb_copy 00:07:51.350 LINK hotplug 00:07:51.350 LINK reconnect 00:07:51.350 LINK abort 00:07:51.350 CC test/event/reactor/reactor.o 00:07:51.350 CC test/event/reactor_perf/reactor_perf.o 00:07:51.350 CC test/event/event_perf/event_perf.o 00:07:51.350 LINK arbitration 00:07:51.350 CC test/event/app_repeat/app_repeat.o 00:07:51.350 LINK nvme_manage 00:07:51.608 CC test/event/scheduler/scheduler.o 00:07:51.608 LINK reactor_perf 00:07:51.608 LINK reactor 00:07:51.608 LINK 
event_perf 00:07:51.608 LINK app_repeat 00:07:51.608 LINK scheduler 00:07:51.867 CC test/nvme/e2edp/nvme_dp.o 00:07:51.867 CC test/nvme/reserve/reserve.o 00:07:51.867 CC test/nvme/aer/aer.o 00:07:51.867 CC test/nvme/err_injection/err_injection.o 00:07:51.867 CC test/nvme/overhead/overhead.o 00:07:51.867 CC test/nvme/sgl/sgl.o 00:07:51.867 CC test/nvme/cuse/cuse.o 00:07:51.867 CC test/nvme/boot_partition/boot_partition.o 00:07:51.867 CC test/nvme/fdp/fdp.o 00:07:51.867 CC test/nvme/connect_stress/connect_stress.o 00:07:51.867 CC test/nvme/reset/reset.o 00:07:51.867 CC test/nvme/simple_copy/simple_copy.o 00:07:51.867 CC test/nvme/startup/startup.o 00:07:51.867 CC test/nvme/compliance/nvme_compliance.o 00:07:51.867 CC test/nvme/fused_ordering/fused_ordering.o 00:07:51.867 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:51.867 CC test/blobfs/mkfs/mkfs.o 00:07:51.867 CC test/accel/dif/dif.o 00:07:51.867 CC test/lvol/esnap/esnap.o 00:07:51.867 LINK boot_partition 00:07:51.867 LINK connect_stress 00:07:51.867 LINK startup 00:07:51.867 LINK reserve 00:07:51.867 LINK doorbell_aers 00:07:51.867 LINK err_injection 00:07:51.867 LINK simple_copy 00:07:51.867 LINK nvme_dp 00:07:51.867 LINK mkfs 00:07:51.867 LINK reset 00:07:51.867 LINK sgl 00:07:51.867 LINK overhead 00:07:51.867 LINK fdp 00:07:52.126 LINK fused_ordering 00:07:52.126 LINK aer 00:07:52.126 LINK nvme_compliance 00:07:52.385 LINK dif 00:07:52.385 CC examples/accel/perf/accel_perf.o 00:07:52.385 CC examples/blob/hello_world/hello_blob.o 00:07:52.385 CC examples/blob/cli/blobcli.o 00:07:52.385 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:52.645 LINK hello_blob 00:07:52.645 LINK hello_fsdev 00:07:52.645 LINK accel_perf 00:07:52.645 LINK cuse 00:07:52.645 LINK blobcli 00:07:53.586 CC examples/bdev/hello_world/hello_bdev.o 00:07:53.586 CC examples/bdev/bdevperf/bdevperf.o 00:07:53.586 LINK hello_bdev 00:07:53.846 LINK bdevperf 00:07:53.846 CC test/bdev/bdevio/bdevio.o 00:07:54.105 LINK bdevio 00:07:55.485 LINK esnap 00:07:55.485 CC examples/nvmf/nvmf/nvmf.o 00:07:55.744 LINK nvmf 00:07:57.127 00:07:57.127 real 0m47.338s 00:07:57.127 user 6m58.478s 00:07:57.127 sys 2m22.179s 00:07:57.127 17:24:53 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:07:57.127 17:24:53 make -- common/autotest_common.sh@10 -- $ set +x 00:07:57.127 ************************************ 00:07:57.127 END TEST make 00:07:57.127 ************************************ 00:07:57.127 17:24:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:57.127 17:24:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:57.127 17:24:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:57.127 17:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.127 17:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:57.127 17:24:53 -- pm/common@44 -- $ pid=1992047 00:07:57.127 17:24:53 -- pm/common@50 -- $ kill -TERM 1992047 00:07:57.127 17:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.127 17:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:57.127 17:24:53 -- pm/common@44 -- $ pid=1992049 00:07:57.127 17:24:53 -- pm/common@50 -- $ kill -TERM 1992049 00:07:57.127 17:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.127 17:24:53 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:57.127 17:24:53 -- pm/common@44 -- $ pid=1992051 00:07:57.127 17:24:53 -- pm/common@50 -- $ kill -TERM 1992051 00:07:57.127 17:24:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.127 17:24:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:57.127 17:24:53 -- pm/common@44 -- $ pid=1992074 00:07:57.127 17:24:53 -- pm/common@50 -- $ sudo -E kill -TERM 1992074 00:07:57.127 17:24:54 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:57.127 17:24:54 -- common/autotest_common.sh@1691 -- # lcov --version 00:07:57.127 17:24:54 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:57.127 17:24:54 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:57.127 17:24:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.127 17:24:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.127 17:24:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.127 17:24:54 -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.127 17:24:54 -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.127 17:24:54 -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.127 17:24:54 -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.127 17:24:54 -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.127 17:24:54 -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.127 17:24:54 -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.127 17:24:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.127 17:24:54 -- scripts/common.sh@344 -- # case "$op" in 00:07:57.127 17:24:54 -- scripts/common.sh@345 -- # : 1 00:07:57.127 17:24:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.127 17:24:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.127 17:24:54 -- scripts/common.sh@365 -- # decimal 1 00:07:57.127 17:24:54 -- scripts/common.sh@353 -- # local d=1 00:07:57.127 17:24:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.127 17:24:54 -- scripts/common.sh@355 -- # echo 1 00:07:57.127 17:24:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.127 17:24:54 -- scripts/common.sh@366 -- # decimal 2 00:07:57.127 17:24:54 -- scripts/common.sh@353 -- # local d=2 00:07:57.128 17:24:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.128 17:24:54 -- scripts/common.sh@355 -- # echo 2 00:07:57.128 17:24:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.128 17:24:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.128 17:24:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.128 17:24:54 -- scripts/common.sh@368 -- # return 0 00:07:57.128 17:24:54 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.128 17:24:54 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.128 --rc genhtml_branch_coverage=1 00:07:57.128 --rc genhtml_function_coverage=1 00:07:57.128 --rc genhtml_legend=1 00:07:57.128 --rc geninfo_all_blocks=1 00:07:57.128 --rc geninfo_unexecuted_blocks=1 00:07:57.128 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:57.128 ' 00:07:57.128 17:24:54 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.128 --rc genhtml_branch_coverage=1 00:07:57.128 --rc genhtml_function_coverage=1 00:07:57.128 --rc genhtml_legend=1 00:07:57.128 --rc geninfo_all_blocks=1 00:07:57.128 --rc geninfo_unexecuted_blocks=1 00:07:57.128 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:57.128 ' 00:07:57.128 17:24:54 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.128 --rc genhtml_branch_coverage=1 00:07:57.128 --rc genhtml_function_coverage=1 00:07:57.128 --rc genhtml_legend=1 00:07:57.128 --rc geninfo_all_blocks=1 00:07:57.128 --rc geninfo_unexecuted_blocks=1 00:07:57.128 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:57.128 ' 00:07:57.128 17:24:54 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:57.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.128 --rc genhtml_branch_coverage=1 00:07:57.128 --rc genhtml_function_coverage=1 00:07:57.128 --rc genhtml_legend=1 00:07:57.128 --rc geninfo_all_blocks=1 00:07:57.128 --rc geninfo_unexecuted_blocks=1 00:07:57.128 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:57.128 ' 00:07:57.128 17:24:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.128 17:24:54 -- nvmf/common.sh@7 -- # uname -s 00:07:57.128 17:24:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.128 17:24:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.128 17:24:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.128 17:24:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.128 17:24:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.128 17:24:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.128 17:24:54 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.128 17:24:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.128 17:24:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.128 17:24:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.128 17:24:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:57.128 17:24:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:57.128 17:24:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.128 17:24:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.128 17:24:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:57.128 17:24:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.128 17:24:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:57.128 17:24:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.128 17:24:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.128 17:24:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.128 17:24:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.128 17:24:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.128 17:24:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.128 17:24:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.128 17:24:54 -- paths/export.sh@5 -- # export PATH 00:07:57.128 17:24:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.128 17:24:54 -- nvmf/common.sh@51 -- # : 0 00:07:57.128 17:24:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.128 17:24:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.128 17:24:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.128 17:24:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.128 17:24:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.128 17:24:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.128 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.128 17:24:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.128 17:24:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.128 17:24:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.128 17:24:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:57.128 17:24:54 -- spdk/autotest.sh@32 -- # uname -s 00:07:57.128 
17:24:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:57.128 17:24:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:57.128 17:24:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:07:57.128 17:24:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:57.128 17:24:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:07:57.128 17:24:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:57.128 17:24:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:57.128 17:24:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:57.128 17:24:54 -- spdk/autotest.sh@48 -- # udevadm_pid=2051661 00:07:57.128 17:24:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:57.128 17:24:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:57.128 17:24:54 -- pm/common@17 -- # local monitor 00:07:57.128 17:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.128 17:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.128 17:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.128 17:24:54 -- pm/common@21 -- # date +%s 00:07:57.128 17:24:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.128 17:24:54 -- pm/common@21 -- # date +%s 00:07:57.128 17:24:54 -- pm/common@25 -- # sleep 1 00:07:57.128 17:24:54 -- pm/common@21 -- # date +%s 00:07:57.128 17:24:54 -- pm/common@21 -- # date +%s 00:07:57.128 17:24:54 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919494 00:07:57.128 17:24:54 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919494 00:07:57.128 17:24:54 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919494 00:07:57.128 17:24:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919494 00:07:57.388 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919494_collect-cpu-load.pm.log 00:07:57.388 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919494_collect-vmstat.pm.log 00:07:57.388 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919494_collect-cpu-temp.pm.log 00:07:57.388 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919494_collect-bmc-pm.bmc.pm.log 00:07:58.328 17:24:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:58.328 17:24:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:58.328 17:24:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:58.328 17:24:55 -- common/autotest_common.sh@10 -- # set +x 
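The prologue traced above saves the kernel core_pattern, points it at scripts/core-collector.sh, and starts the collect-cpu-load / collect-vmstat / collect-cpu-temp / collect-bmc-pm monitors. Below is a minimal sketch of that core_pattern swap only, not the captured script itself; $rootdir, $output_dir and the restore trap are assumptions for illustration, and writing core_pattern requires root.

  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)                  # remember the current handler
  mkdir -p "$output_dir/coredumps"                                       # collector drops dumps here ($output_dir assumed)
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT   # put the original handler back when the run ends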
00:07:58.328 17:24:55 -- spdk/autotest.sh@59 -- # create_test_list 00:07:58.328 17:24:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:07:58.328 17:24:55 -- common/autotest_common.sh@10 -- # set +x 00:07:58.328 17:24:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:07:58.328 17:24:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:58.328 17:24:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:58.328 17:24:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:07:58.328 17:24:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:58.328 17:24:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:58.328 17:24:55 -- common/autotest_common.sh@1455 -- # uname 00:07:58.328 17:24:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:58.328 17:24:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:58.328 17:24:55 -- common/autotest_common.sh@1475 -- # uname 00:07:58.328 17:24:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:58.328 17:24:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:58.328 17:24:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:07:58.328 lcov: LCOV version 1.15 00:07:58.328 17:24:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:08:06.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:10.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:08:13.949 17:25:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:13.949 17:25:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.949 17:25:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.949 17:25:10 -- spdk/autotest.sh@78 -- # rm -f 00:08:13.949 17:25:10 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:17.244 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:08:17.244 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:08:17.244 
0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:08:17.244 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:08:17.504 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:08:17.504 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:08:17.504 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:08:17.504 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:08:17.504 17:25:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:17.504 17:25:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:17.504 17:25:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:17.504 17:25:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:17.504 17:25:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:17.504 17:25:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:17.504 17:25:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:17.504 17:25:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:17.504 17:25:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:17.504 17:25:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:17.504 17:25:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:17.504 17:25:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:17.504 17:25:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:17.504 17:25:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:17.504 17:25:14 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:17.504 No valid GPT data, bailing 00:08:17.504 17:25:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:17.504 17:25:14 -- scripts/common.sh@394 -- # pt= 00:08:17.504 17:25:14 -- scripts/common.sh@395 -- # return 1 00:08:17.504 17:25:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:17.504 1+0 records in 00:08:17.504 1+0 records out 00:08:17.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00173016 s, 606 MB/s 00:08:17.504 17:25:14 -- spdk/autotest.sh@105 -- # sync 00:08:17.504 17:25:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:17.504 17:25:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:17.504 17:25:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:22.782 17:25:19 -- spdk/autotest.sh@111 -- # uname -s 00:08:22.782 17:25:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:22.782 17:25:19 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:08:22.782 17:25:19 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:08:22.782 17:25:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.782 17:25:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.782 17:25:19 -- common/autotest_common.sh@10 -- # set +x 00:08:22.782 ************************************ 00:08:22.782 START TEST setup.sh 00:08:22.782 ************************************ 00:08:22.782 17:25:19 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:08:22.782 * Looking for test storage... 
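The pre-cleanup pass above walks /sys/block/nvme*, treats a namespace as zoned when its queue/zoned attribute reports anything other than "none", confirms /dev/nvme0n1 carries no GPT via spdk-gpt.py, then zeroes its first MiB with dd and syncs. A minimal sketch of that zoned-device check, written as a generic re-implementation rather than the exact get_zoned_devs helper from autotest_common.sh:

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned ]] || continue                 # attribute only exists for block queues
      [[ $(cat "$nvme/queue/zoned") == none ]] && continue   # "none" means a regular, non-zoned namespace
      zoned_devs[${nvme##*/}]=1                              # remember zoned namespaces so later wipes can skip them
  done
  (( ${#zoned_devs[@]} > 0 )) && echo "zoned devices: ${!zoned_devs[*]}"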
00:08:22.782 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:08:22.782 17:25:19 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.782 17:25:19 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.782 17:25:19 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@345 -- # : 1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@353 -- # local d=1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@355 -- # echo 1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@353 -- # local d=2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@355 -- # echo 2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.064 17:25:19 setup.sh -- scripts/common.sh@368 -- # return 0 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:23.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.064 --rc genhtml_branch_coverage=1 00:08:23.064 --rc genhtml_function_coverage=1 00:08:23.064 --rc genhtml_legend=1 00:08:23.064 --rc geninfo_all_blocks=1 00:08:23.064 --rc geninfo_unexecuted_blocks=1 00:08:23.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.064 ' 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:23.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.064 --rc genhtml_branch_coverage=1 00:08:23.064 --rc genhtml_function_coverage=1 00:08:23.064 --rc genhtml_legend=1 00:08:23.064 --rc geninfo_all_blocks=1 00:08:23.064 --rc geninfo_unexecuted_blocks=1 
00:08:23.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.064 ' 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:23.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.064 --rc genhtml_branch_coverage=1 00:08:23.064 --rc genhtml_function_coverage=1 00:08:23.064 --rc genhtml_legend=1 00:08:23.064 --rc geninfo_all_blocks=1 00:08:23.064 --rc geninfo_unexecuted_blocks=1 00:08:23.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.064 ' 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:23.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.064 --rc genhtml_branch_coverage=1 00:08:23.064 --rc genhtml_function_coverage=1 00:08:23.064 --rc genhtml_legend=1 00:08:23.064 --rc geninfo_all_blocks=1 00:08:23.064 --rc geninfo_unexecuted_blocks=1 00:08:23.064 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.064 ' 00:08:23.064 17:25:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:08:23.064 17:25:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:23.064 17:25:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.064 17:25:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:23.064 ************************************ 00:08:23.064 START TEST acl 00:08:23.064 ************************************ 00:08:23.064 17:25:20 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:08:23.064 * Looking for test storage... 
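Each test script repeats the same preamble seen above: it takes the last field of lcov --version and runs the cmp_versions helper (the "lt 1.15 2" trace) to decide whether the 1.x-style --rc lcov_* options belong in LCOV_OPTS. A simplified, self-contained re-implementation of that dotted-version comparison follows; the real helper in scripts/common.sh also validates each component with its decimal function, which this sketch omits.

  lt() {                                    # usage: lt 1.15 2  -> exit 0 when $1 < $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # strictly greater somewhere -> not less
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # strictly smaller -> less
      done
      return 1                                               # equal all the way -> not strictly less
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x coverage options selected"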
00:08:23.064 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:08:23.064 17:25:20 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:23.064 17:25:20 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:08:23.064 17:25:20 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:23.413 17:25:20 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.413 17:25:20 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.414 17:25:20 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.414 --rc genhtml_branch_coverage=1 00:08:23.414 --rc genhtml_function_coverage=1 00:08:23.414 --rc genhtml_legend=1 00:08:23.414 --rc geninfo_all_blocks=1 00:08:23.414 --rc geninfo_unexecuted_blocks=1 00:08:23.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.414 ' 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.414 --rc genhtml_branch_coverage=1 00:08:23.414 --rc 
genhtml_function_coverage=1 00:08:23.414 --rc genhtml_legend=1 00:08:23.414 --rc geninfo_all_blocks=1 00:08:23.414 --rc geninfo_unexecuted_blocks=1 00:08:23.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.414 ' 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.414 --rc genhtml_branch_coverage=1 00:08:23.414 --rc genhtml_function_coverage=1 00:08:23.414 --rc genhtml_legend=1 00:08:23.414 --rc geninfo_all_blocks=1 00:08:23.414 --rc geninfo_unexecuted_blocks=1 00:08:23.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.414 ' 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:23.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.414 --rc genhtml_branch_coverage=1 00:08:23.414 --rc genhtml_function_coverage=1 00:08:23.414 --rc genhtml_legend=1 00:08:23.414 --rc geninfo_all_blocks=1 00:08:23.414 --rc geninfo_unexecuted_blocks=1 00:08:23.414 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.414 ' 00:08:23.414 17:25:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:23.414 17:25:20 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:23.414 17:25:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:08:23.414 17:25:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:08:23.414 17:25:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:08:23.414 17:25:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:08:23.414 17:25:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:08:23.414 17:25:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:23.414 17:25:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:26.876 17:25:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:08:26.876 17:25:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:08:26.876 17:25:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:26.876 17:25:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:08:26.876 17:25:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:08:26.876 17:25:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:08:30.166 Hugepages 00:08:30.166 node hugesize free / total 00:08:30.166 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 00:08:30.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:08:30.167 17:25:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:08:30.167 17:25:27 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.167 17:25:27 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.167 17:25:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:30.167 ************************************ 00:08:30.167 START TEST denied 00:08:30.167 ************************************ 00:08:30.167 17:25:27 setup.sh.acl.denied -- 
common/autotest_common.sh@1125 -- # denied 00:08:30.167 17:25:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:08:30.167 17:25:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:08:30.167 17:25:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:08:30.167 17:25:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:08:30.167 17:25:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:08:34.360 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:08:34.360 17:25:30 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:08:34.360 17:25:30 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:08:34.360 17:25:30 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:08:34.360 17:25:30 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:08:34.360 17:25:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:08:34.361 17:25:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:34.361 17:25:30 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:34.361 17:25:30 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:08:34.361 17:25:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:34.361 17:25:30 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:38.552 00:08:38.552 real 0m8.019s 00:08:38.552 user 0m2.602s 00:08:38.552 sys 0m4.725s 00:08:38.552 17:25:35 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.552 17:25:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:08:38.552 ************************************ 00:08:38.552 END TEST denied 00:08:38.552 ************************************ 00:08:38.552 17:25:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:38.552 17:25:35 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.552 17:25:35 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.552 17:25:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:38.552 ************************************ 00:08:38.552 START TEST allowed 00:08:38.552 ************************************ 00:08:38.552 17:25:35 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:08:38.552 17:25:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:08:38.552 17:25:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:08:38.552 17:25:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:08:38.552 17:25:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.552 17:25:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:08:45.126 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:45.126 17:25:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:08:45.126 17:25:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:08:45.126 17:25:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:08:45.126 17:25:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:45.126 17:25:41 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:08:49.324 00:08:49.324 real 0m10.267s 00:08:49.324 user 0m2.485s 00:08:49.324 sys 0m4.670s 00:08:49.324 17:25:45 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.324 17:25:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 ************************************ 00:08:49.324 END TEST allowed 00:08:49.324 ************************************ 00:08:49.324 00:08:49.324 real 0m25.611s 00:08:49.324 user 0m7.851s 00:08:49.324 sys 0m14.246s 00:08:49.324 17:25:45 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.324 17:25:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 ************************************ 00:08:49.324 END TEST acl 00:08:49.324 ************************************ 00:08:49.324 17:25:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:08:49.324 17:25:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.324 17:25:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.324 17:25:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:49.324 ************************************ 00:08:49.324 START TEST hugepages 00:08:49.324 ************************************ 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:08:49.324 * Looking for test storage... 00:08:49.324 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.324 17:25:45 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.324 --rc genhtml_branch_coverage=1 00:08:49.324 --rc genhtml_function_coverage=1 00:08:49.324 --rc genhtml_legend=1 00:08:49.324 --rc geninfo_all_blocks=1 00:08:49.324 --rc geninfo_unexecuted_blocks=1 00:08:49.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:49.324 ' 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.324 --rc genhtml_branch_coverage=1 00:08:49.324 --rc genhtml_function_coverage=1 00:08:49.324 --rc genhtml_legend=1 00:08:49.324 --rc geninfo_all_blocks=1 00:08:49.324 --rc geninfo_unexecuted_blocks=1 00:08:49.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:49.324 ' 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.324 --rc genhtml_branch_coverage=1 00:08:49.324 --rc genhtml_function_coverage=1 00:08:49.324 --rc genhtml_legend=1 00:08:49.324 --rc geninfo_all_blocks=1 00:08:49.324 --rc geninfo_unexecuted_blocks=1 00:08:49.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:49.324 ' 00:08:49.324 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.324 --rc genhtml_branch_coverage=1 00:08:49.324 --rc genhtml_function_coverage=1 00:08:49.324 --rc genhtml_legend=1 00:08:49.324 --rc geninfo_all_blocks=1 00:08:49.324 --rc geninfo_unexecuted_blocks=1 00:08:49.324 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:49.324 ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:49.324 17:25:45 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 74177640 kB' 'MemAvailable: 77813360 kB' 'Buffers: 9752 kB' 'Cached: 11654540 kB' 'SwapCached: 0 kB' 'Active: 8602856 kB' 'Inactive: 3709176 kB' 'Active(anon): 8117812 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651188 kB' 'Mapped: 182016 kB' 'Shmem: 7470072 kB' 'KReclaimable: 190184 kB' 'Slab: 626860 kB' 'SReclaimable: 190184 kB' 'SUnreclaim: 436676 kB' 'KernelStack: 16320 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434212 kB' 'Committed_AS: 9349436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198960 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.324 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 
17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 
17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.325 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
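The block above is the shell trace of the hugepage-size probe: setup/common.sh walks /proc/meminfo field by field with IFS=': ' and read -r var val _, skips every key that is not Hugepagesize, and finally echoes 2048, which hugepages.sh records as default_hugepages. A minimal sketch of that lookup pattern, reconstructed from the trace (the helper name below is an assumption, not the real function in setup/common.sh):

    # sketch: fetch one field from /proc/meminfo the way the traced loop does
    meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # non-matching keys fall through
            echo "$val"                         # value only, e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(meminfo_field Hugepagesize)   # -> 2048 (kB), as echoed above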
00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:08:49.326 17:25:45 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:08:49.326 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.326 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.326 17:25:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:49.326 ************************************ 00:08:49.326 START TEST single_node_setup 00:08:49.326 ************************************ 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:08:49.326 17:25:45 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:52.617 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:52.617 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:55.910 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76348988 kB' 'MemAvailable: 79984528 kB' 'Buffers: 9752 kB' 'Cached: 11654680 kB' 'SwapCached: 0 kB' 'Active: 8603188 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118144 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651364 kB' 'Mapped: 182044 kB' 'Shmem: 7470212 kB' 'KReclaimable: 189824 kB' 'Slab: 625636 kB' 'SReclaimable: 189824 kB' 'SUnreclaim: 435812 kB' 'KernelStack: 16160 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
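The meminfo snapshot printed just above already shows the pool this test configured: HugePages_Total and HugePages_Free are 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, which matches the size=2097152 and nr_hugepages=1024 values computed by get_test_nr_hugepages earlier in the trace. The relationship is plain division; a small sketch of the assumed arithmetic (illustrative only, not the actual check performed by hugepages.sh):

    # assumed arithmetic behind the traced numbers
    size_kb=2097152          # requested allocation, from get_test_nr_hugepages
    hugepagesize_kb=2048     # default_hugepages probed from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024 pages, as traced
    echo $(( nr_hugepages * hugepagesize_kb ))      # 2097152 kB, the Hugetlb line above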
00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.910 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
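A few entries back, before verify_nr_hugepages started scanning meminfo, scripts/setup.sh reported the ioatdma channels (8086 2021) and the NVMe drive at 0000:5e:00.0 (8086 0a54) being rebound to vfio-pci. For reference, the generic sysfs sequence such a rebind relies on looks roughly like the sketch below; this is the standard driver_override mechanism, not a quote of setup.sh:

    # generic vfio-pci rebind via driver_override (illustrative sketch)
    bdf=0000:5e:00.0
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind    # release nvme/ioatdma
    echo "$bdf"   > /sys/bus/pci/drivers_probe                 # re-probe, binds vfio-pci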
00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:55.911 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76349280 kB' 'MemAvailable: 79984816 kB' 'Buffers: 9752 kB' 'Cached: 11654680 kB' 'SwapCached: 0 kB' 'Active: 8603260 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118216 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651320 kB' 'Mapped: 181588 kB' 'Shmem: 7470212 kB' 'KReclaimable: 189816 kB' 'Slab: 625612 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 435796 kB' 'KernelStack: 16096 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9351632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198928 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.911 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.912 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76349136 kB' 'MemAvailable: 79984672 kB' 'Buffers: 9752 kB' 'Cached: 11654696 kB' 'SwapCached: 0 kB' 'Active: 8603240 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118196 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651284 kB' 'Mapped: 181588 kB' 'Shmem: 7470228 kB' 'KReclaimable: 189816 kB' 'Slab: 625612 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 435796 kB' 'KernelStack: 16112 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9351652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.913 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.914 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:08:55.915 nr_hugepages=1024 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:08:55.915 resv_hugepages=0 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:08:55.915 surplus_hugepages=0 00:08:55.915 17:25:52 
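The trace above is the get_meminfo helper scanning meminfo field by field: it splits each line with IFS=': ' and read -r var val _, continues past every key that is not the one requested (HugePages_Rsvd here), then echoes the value and returns. Below is a minimal sketch of that pattern for reference; it is an illustrative reconstruction under those assumptions, not SPDK's actual setup/common.sh.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern exercised in the trace above.
# Illustrative reconstruction only, not the real setup/common.sh.
shopt -s extglob    # the +([0-9]) patterns below are extended globs

get_meminfo() {
    local get=$1        # field to look up, e.g. HugePages_Rsvd
    local node=${2:-}   # optional NUMA node index
    local var val _
    local mem_f=/proc/meminfo
    # a per-node lookup reads the node-local meminfo instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # node files prefix every line with "Node <n> "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    # scan field by field, skipping until the requested key matches
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd     # prints 0 on the machine traced above
get_meminfo HugePages_Surp 0   # same field, restricted to NUMA node 0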
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:08:55.915 anon_hugepages=0 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76348332 kB' 'MemAvailable: 79983868 kB' 'Buffers: 9752 kB' 'Cached: 11654724 kB' 'SwapCached: 0 kB' 'Active: 8603204 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118160 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651220 kB' 'Mapped: 181588 kB' 'Shmem: 7470256 kB' 'KReclaimable: 189816 kB' 'Slab: 625612 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 435796 kB' 'KernelStack: 16240 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9351676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199008 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.915 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 
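The snapshot printed inside this HugePages_Total lookup already ties the hugepage numbers together: with a single huge page size configured (assumed here, as only 2048 kB pages appear in the dump), the Hugetlb figure is simply HugePages_Total multiplied by Hugepagesize. A quick cross-check against the values from that dump:

# Values taken from the meminfo dump above; with only 2 MiB pages configured,
# Hugetlb should equal HugePages_Total * Hugepagesize.
pages=1024       # HugePages_Total
page_kb=2048     # Hugepagesize in kB
echo $(( pages * page_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'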
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:55.916 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 42600756 kB' 'MemUsed: 5513248 kB' 'SwapCached: 0 kB' 'Active: 2585352 kB' 'Inactive: 102684 kB' 'Active(anon): 2388972 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2403488 kB' 'Mapped: 85644 kB' 'AnonPages: 287600 kB' 'Shmem: 2104424 kB' 'KernelStack: 9608 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87324 kB' 'Slab: 326756 kB' 'SReclaimable: 87324 kB' 'SUnreclaim: 239432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:55.917 17:25:52 setup.sh.hugepages.single_node_setup -- 
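The (( 1024 == nr_hugepages + surp + resv )) checks traced at hugepages.sh@106 and @109 are the core assertion of this step: the kernel's HugePages_Total has to equal the configured nr_hugepages plus whatever is reserved or surplus. Below is a standalone sketch of that check; it assumes the get_meminfo helper sketched earlier and mirrors the logic rather than quoting the literal setup/hugepages.sh code.

# Hugepage accounting check, mirroring hugepages.sh@106/@109 above.
# Sketch only; assumes the get_meminfo function from the earlier example.
nr_hugepages=1024                       # the count the test configured
total=$(get_meminfo HugePages_Total)    # 1024 in the dump above
resv=$(get_meminfo HugePages_Rsvd)      # 0
surp=$(get_meminfo HugePages_Surp)      # 0
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent"
else
    echo "mismatch: total=$total nr=$nr_hugepages resv=$resv surp=$surp" >&2
fi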
setup/common.sh@31 -- # read -r var val _
[xtrace condensed for readability: the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle repeats for each remaining field of the node meminfo file (MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) until the requested field matches]
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:08:55.918 node0=1024 expecting 1024
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:08:55.918
00:08:55.918 real	0m6.753s
00:08:55.918 user	0m1.396s
00:08:55.918 sys	0m2.330s
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:55.918 17:25:52 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:08:55.918 ************************************
00:08:55.918 END TEST single_node_setup
00:08:55.918 ************************************
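For readability, here is a minimal self-contained sketch of the check traced right above (paraphrased from the setup/hugepages.sh xtrace, not the verbatim SPDK helper): the per-node hugepage counts the test requested are compared, as a set of values, against the counts read back from the kernel, and the test passes only when the two sets agree, which is what the "node0=1024 expecting 1024" line reports.

    declare -a nodes_test=([0]=1024)   # requested pages per node (values taken from the trace)
    declare -a nodes_sys=([0]=1024)    # pages actually reported by the kernel per node
    declare -A sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1                # collect requested counts as a set
        sorted_s[${nodes_sys[node]}]=1                 # collect observed counts as a set
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Holds when both sets of counts match (here: 1024 == 1024, as in the trace).
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node hugepage counts match"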
00:08:55.918 17:25:52 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:08:55.918 17:25:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:55.918 17:25:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:55.918 17:25:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:08:55.918 ************************************
00:08:55.918 START TEST even_2G_alloc
00:08:55.918 ************************************
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:08:55.918 17:25:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:08:59.215 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:08:59.215 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:08:59.215 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
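The even_2G_alloc test traced above asks get_test_nr_hugepages for 2097152 (interpreted here as kB, i.e. 2 GiB, which is consistent with the Hugetlb figure in the meminfo snapshots below) and arrives at nr_hugepages=1024, which get_test_nr_hugepages_per_node then spreads evenly over the two NUMA nodes as 512 pages each. A minimal, self-contained sketch of that arithmetic (a paraphrase under those assumptions, not the verbatim setup/hugepages.sh code):

    size_kb=2097152                       # requested pool size: 2 GiB expressed in kB (assumption)
    hugepage_kb=2048                      # default hugepage size, matches Hugepagesize in the snapshots
    nr_hugepages=$(( size_kb / hugepage_kb ))          # 2097152 / 2048 = 1024 pages
    no_nodes=2                            # NUMA nodes on this host (_no_nodes=2 in the trace)
    declare -a nodes_test=()
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 pages for node1, then node0
    done
    echo "nr_hugepages=$nr_hugepages per node: ${nodes_test[*]}"   # -> 1024, "512 512"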
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:08:59.215 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:08:59.216 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76350184 kB' 'MemAvailable: 79985720 kB' 'Buffers: 9752 kB' 'Cached: 11654824 kB' 'SwapCached: 0 kB' 'Active: 8603448 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118404 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651248 kB' 'Mapped: 180796 kB' 'Shmem: 7470356 kB' 'KReclaimable: 189816 kB' 'Slab: 626012 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436196 kB' 'KernelStack: 16080 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9342744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199040 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
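The block above is the xtrace of the get_meminfo helper resolving AnonHugePages against a fresh /proc/meminfo snapshot. For readability, a self-contained sketch of what that helper does, reconstructed from the traced commands (a paraphrase assumed to mirror setup/common.sh, not a verbatim copy):

    shopt -s extglob                      # the "Node +([0-9]) " strip below uses an extglob pattern

    get_meminfo_sketch() {
        # Print the value of one field from /proc/meminfo, or from a node's own
        # meminfo file when a NUMA node number is passed as the second argument.
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node <n> "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Total    # on this box prints 1024
    get_meminfo_sketch HugePages_Surp 0   # per-node lookup (on a NUMA host), prints the node0 surplus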
[xtrace condensed for readability: the same IFS=': ' / read -r var val _ / [[ $var == AnonHugePages ]] / continue cycle repeats for every /proc/meminfo field from MemTotal through HardwareCorrupted]
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:08:59.217 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76349704 kB' 'MemAvailable: 79985240 kB' 'Buffers: 9752 kB' 'Cached: 11654828 kB' 'SwapCached: 0 kB' 'Active: 8603164 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118120 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650972 kB' 'Mapped: 180716 kB' 'Shmem: 7470360 kB' 'KReclaimable: 189816 kB' 'Slab: 626020 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436204 kB' 'KernelStack: 16048 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9342760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
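At this point verify_nr_hugepages has anon=0 from the AnonHugePages read, and the lookup above is resolving HugePages_Surp (it returns 0 just below, giving surp=0), with HugePages_Rsvd read next. The snapshots themselves already show the pool in the expected state; the following one-liner double-checks that using only the values printed above (no new data):

    hugepages_total=1024    # HugePages_Total from the snapshot
    hugepagesize_kb=2048    # Hugepagesize from the snapshot
    hugetlb_kb=2097152      # Hugetlb from the snapshot
    # 1024 pages x 2048 kB = 2097152 kB = 2 GiB, i.e. the pool matches the requested size.
    (( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo "hugepage pool is fully allocated"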
[xtrace condensed for readability: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle repeats for every /proc/meminfo field from MemTotal through HugePages_Rsvd]
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76349452 kB' 'MemAvailable: 79984988 kB' 'Buffers: 9752 kB' 'Cached: 11654848 kB' 'SwapCached: 0 kB' 'Active: 8603076 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118032 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650852 kB' 'Mapped: 180716 kB' 'Shmem: 7470380 kB' 'KReclaimable: 189816 kB' 'Slab: 626020 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436204 kB' 'KernelStack: 16032 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9350352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199008 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB'
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.219 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.220 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:08:59.221 nr_hugepages=1024 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:08:59.221 resv_hugepages=0 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:08:59.221 surplus_hugepages=0 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:08:59.221 anon_hugepages=0 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.221 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76348960 kB' 'MemAvailable: 79984496 kB' 'Buffers: 9752 kB' 'Cached: 11654868 kB' 'SwapCached: 0 kB' 'Active: 8603184 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118140 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650940 kB' 'Mapped: 180716 kB' 'Shmem: 7470400 kB' 'KReclaimable: 189816 kB' 'Slab: 626012 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436196 kB' 'KernelStack: 16016 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9342436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:08:59.222 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.222 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
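The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches HugePages_Total. A minimal, standalone sketch of that lookup pattern, reconstructed from the traced commands (the function name, the per-node sysfs fallback, and the "Node <n> " prefix stripping appear in the trace itself; the real SPDK helper may differ in detail):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a NUMA node index is given, read that node's sysfs meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # formats parse as "Field: value [kB]".
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# On this runner (per the values printed in the log):
#   get_meminfo HugePages_Total     -> 1024   (system-wide)
#   get_meminfo HugePages_Total 0   -> 512    (node0 sysfs value)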
00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:08:59.223 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.223 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 43639560 kB' 'MemUsed: 4474444 kB' 'SwapCached: 0 kB' 'Active: 2585636 kB' 'Inactive: 102684 kB' 'Active(anon): 2389256 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2403520 kB' 'Mapped: 85184 kB' 'AnonPages: 287908 kB' 'Shmem: 2104456 kB' 'KernelStack: 9448 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87324 kB' 'Slab: 327080 kB' 'SReclaimable: 87324 kB' 'SUnreclaim: 239756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
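From here the same helper is re-run once per NUMA node (node=0, then node=1) against /sys/devices/system/node/nodeN/meminfo so hugepages.sh can confirm that the 1024 reserved 2 MB pages were split evenly between the two nodes. A compact illustration of that per-node check, reusing the get_meminfo sketch above (the 512-pages-per-node expectation is taken from the HugePages_Total/HugePages_Free values printed for node0 and node1 in this log; the real hugepages.sh bookkeeping differs):

expected_per_node=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(get_meminfo HugePages_Total "$node")
    surp=$(get_meminfo HugePages_Surp "$node")
    # Surplus pages sit above the persistent pool, so subtract them first.
    if (( total - surp != expected_per_node )); then
        echo "node$node: expected $expected_per_node persistent 2M pages, got $((total - surp))" >&2
        exit 1
    fi
done
echo "2M hugepages are split evenly: $expected_per_node per node"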
00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.224 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:59.225 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171516 kB' 'MemFree: 32709224 kB' 'MemUsed: 11462292 kB' 'SwapCached: 0 kB' 'Active: 6017664 kB' 'Inactive: 3606492 kB' 'Active(anon): 5729000 kB' 'Inactive(anon): 0 kB' 'Active(file): 288664 kB' 'Inactive(file): 3606492 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9261144 kB' 'Mapped: 95532 kB' 'AnonPages: 363052 kB' 'Shmem: 5365988 kB' 'KernelStack: 6568 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102492 kB' 'Slab: 298932 kB' 'SReclaimable: 102492 kB' 'SUnreclaim: 196440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.225 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:08:59.226 17:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:08:59.226 node0=512 expecting 512 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:08:59.226 node1=512 expecting 512 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:08:59.226 00:08:59.226 real 0m3.417s 00:08:59.226 user 0m1.359s 00:08:59.226 sys 0m2.150s 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.226 17:25:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:59.226 ************************************ 00:08:59.226 END TEST even_2G_alloc 00:08:59.226 ************************************ 00:08:59.226 17:25:56 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:08:59.226 17:25:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.226 17:25:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.226 17:25:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:59.486 ************************************ 00:08:59.486 START TEST odd_alloc 00:08:59.486 ************************************ 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 
00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:59.486 17:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:02.788 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:09:02.788 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:09:02.788 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 
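The odd_alloc setup traced above requests 1025 2 MB hugepages (HUGEMEM=2049 MB) and spreads them over the two NUMA nodes by repeated integer division, so node1 is assigned 512 pages and node0 the remaining 513. A short sketch of that split, assuming a simplified standalone form of the traced loop; split_hugepages_per_node is a hypothetical name, not the SPDK function:

#!/usr/bin/env bash
# split_hugepages_per_node <total> <node-count> -- illustrative sketch only.
# Works from the last node down: each node gets the integer share of what is
# still unassigned, so any remainder ends up on the lowest-numbered node.
split_hugepages_per_node() {
    local total=$1 nodes=$2
    local -a per_node
    while (( nodes > 0 )); do
        per_node[nodes - 1]=$(( total / nodes ))   # integer share for this node
        total=$(( total - per_node[nodes - 1] ))   # what is left for lower nodes
        (( nodes-- ))
    done
    local i
    for i in "${!per_node[@]}"; do
        echo "node$i=${per_node[i]}"
    done
}

split_hugepages_per_node 1025 2   # prints node0=513, node1=512

For an even request, such as the even_2G_alloc case that just finished above, the same loop hands each node exactly half, which is why both nodes report "512 expecting 512" in that test's output.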
00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76365180 kB' 'MemAvailable: 80000716 kB' 'Buffers: 9752 kB' 'Cached: 11654980 kB' 'SwapCached: 0 kB' 'Active: 8604932 kB' 'Inactive: 3709176 kB' 'Active(anon): 8119888 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 652604 kB' 'Mapped: 180880 kB' 'Shmem: 7470512 kB' 'KReclaimable: 189816 kB' 'Slab: 626128 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436312 kB' 'KernelStack: 16224 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9343420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199248 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.788 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 
17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76365424 kB' 'MemAvailable: 80000960 kB' 'Buffers: 9752 kB' 'Cached: 11654984 kB' 'SwapCached: 0 kB' 'Active: 8603700 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118656 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651412 kB' 'Mapped: 180728 kB' 'Shmem: 7470516 kB' 'KReclaimable: 189816 kB' 'Slab: 626100 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436284 kB' 'KernelStack: 16048 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9343436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199136 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 
17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.789 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:02.790 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0
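The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ' until the requested key (here HugePages_Surp) matches, then echoing its value. Below is a minimal standalone sketch of that parsing pattern; the helper name is hypothetical and it omits the per-node handling the real helper has (stripping the "Node N " prefix when reading /sys/devices/system/node/nodeN/meminfo).

# Hypothetical helper, not the SPDK setup/common.sh implementation:
# scan a meminfo-style file line by line and print the value of one field.
get_meminfo_field() {
    local get=$1
    local file="${2:-/proc/meminfo}"
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields, as in the trace
        echo "$val"                        # numeric value, e.g. 0 for HugePages_Surp
        return 0
    done < "$file"
    return 1                               # field not present
}

# usage (hypothetical): surp=$(get_meminfo_field HugePages_Surp)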
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76365424 kB' 'MemAvailable: 80000960 kB' 'Buffers: 9752 kB' 'Cached: 11654984 kB' 'SwapCached: 0 kB' 'Active: 8603740 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118696 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651444 kB' 'Mapped: 180728 kB' 'Shmem: 7470516 kB' 'KReclaimable: 189816 kB' 'Slab: 626100 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436284 kB' 'KernelStack: 16064 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9343456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199120 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:09:02.791 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:09:02.793 nr_hugepages=1025
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:09:02.793 resv_hugepages=0
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:09:02.793 surplus_hugepages=0
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:09:02.793 anon_hugepages=0
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
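At this point the script has read back surp=0 and resv=0 and checks that the kernel actually honoured the deliberately odd request of 1025 hugepages: the configured nr_hugepages plus surplus and reserved pages must add up to the expected total. A hypothetical restatement of that arithmetic (variable names chosen for illustration, not taken from hugepages.sh):

# Sketch of the consistency check visible in the trace above (assumed semantics).
expected=1025       # odd count requested by the odd_alloc test
nr_hugepages=1025   # value echoed by the script
surp=0              # HugePages_Surp read from /proc/meminfo
resv=0              # HugePages_Rsvd read from /proc/meminfo

if (( expected == nr_hugepages + surp + resv )); then
    echo "odd hugepage allocation fully accounted for"
fi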
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76365480 kB' 'MemAvailable: 80001016 kB' 'Buffers: 9752 kB' 'Cached: 11655024 kB' 'SwapCached: 0 kB' 'Active: 8603344 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118300 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650980 kB' 'Mapped: 180728 kB' 'Shmem: 7470556 kB' 'KReclaimable: 189816 kB' 'Slab: 626100 kB' 'SReclaimable: 189816 kB' 'SUnreclaim: 436284 kB' 'KernelStack: 16032 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481764 kB' 'Committed_AS: 9343476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199120 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:09:02.793 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:09:02.794 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:09:02.794 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:09:02.794 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:02.794 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
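The get_nodes trace above reports the odd total of 1025 hugepages sitting on the two NUMA nodes as 513 + 512 (nodes_sys[0]=513, nodes_sys[1]=512, no_nodes=2). The sketch below only illustrates how an odd count ends up split that way across nodes; the real hugepages.sh reads the per-node counts from sysfs rather than computing them.

# Illustrative split of an odd hugepage count across NUMA nodes (assumed logic,
# not the actual hugepages.sh implementation).
total=1025          # nr_hugepages requested by the odd_alloc test
no_nodes=2          # NUMA nodes present in this run (node0, node1)

declare -a split
for (( i = 0; i < no_nodes; i++ )); do
    split[i]=$(( total / no_nodes ))        # 512 per node
done
for (( i = 0; i < total % no_nodes; i++ )); do
    (( split[i]++ ))                        # remainder lands on node0 -> 513
done

for (( i = 0; i < no_nodes; i++ )); do
    echo "node$i: ${split[i]}"              # node0: 513, node1: 512
done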
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 43645376 kB' 'MemUsed: 4468628 kB' 'SwapCached: 0 kB' 'Active: 2586296 kB' 'Inactive: 102684 kB' 'Active(anon): 2389916 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2403556 kB' 'Mapped: 85196 kB' 'AnonPages: 288540 kB' 'Shmem: 2104492 kB' 'KernelStack: 9480 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87324 kB' 'Slab: 327212 kB' 'SReclaimable: 87324 kB' 'SUnreclaim: 239888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
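The loops at setup/hugepages.sh@109-129 in this trace amount to a per-node verification. A rough sketch with my own variable names and the values from this run (513 pages expected on node 0, 512 on node 1, no reserved or surplus pages); it is an illustration of the bookkeeping, not the project's script:

nodes_sys=([0]=513 [1]=512)     # per-node HugePages counts read from sysfs in get_nodes above
nodes_test=([0]=513 [1]=512)    # what odd_alloc asked for: 1025 pages split across 2 nodes
resv=0                          # HugePages_Rsvd in this run
surp=0                          # HugePages_Surp for both nodes in this run
sorted_t=(); sorted_s=()
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))
    sorted_t[nodes_test[node]]=1     # indexing an array by the value yields a sorted key list
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo "per-node hugepage layout verified"   # "512 513" == "512 513"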
00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.795 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:09:02.796 17:25:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44171516 kB' 'MemFree: 32719752 kB' 'MemUsed: 11451764 kB' 'SwapCached: 0 kB' 'Active: 6017768 kB' 'Inactive: 3606492 kB' 'Active(anon): 5729104 kB' 'Inactive(anon): 0 kB' 'Active(file): 288664 kB' 'Inactive(file): 3606492 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9261224 kB' 'Mapped: 95532 kB' 'AnonPages: 363148 kB' 'Shmem: 5366068 kB' 'KernelStack: 6568 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102492 kB' 'Slab: 298888 kB' 'SReclaimable: 102492 kB' 'SUnreclaim: 196396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 
17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.796 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 
17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:09:02.797 node0=513 expecting 513 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:09:02.797 node1=512 expecting 512 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:09:02.797 00:09:02.797 real 0m3.388s 00:09:02.797 user 0m1.282s 00:09:02.797 sys 0m2.195s 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.797 17:25:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:09:02.797 ************************************ 00:09:02.797 END TEST odd_alloc 00:09:02.797 ************************************ 00:09:02.797 17:25:59 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:09:02.797 17:25:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.797 17:25:59 setup.sh.hugepages -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.797 17:25:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:09:02.797 ************************************ 00:09:02.797 START TEST custom_alloc 00:09:02.797 ************************************ 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:09:02.797 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 
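The two get_test_nr_hugepages calls in this trace (1048576 kB above and the 2097152 kB call that follows) simply divide a size in kB by the 2048 kB default hugepage size and assign the result to a node. The arithmetic, restated with placeholder variable names of my own:

default_hugepages_kb=2048                             # Hugepagesize reported in /proc/meminfo on this box
node0_kb=1048576
node1_kb=2097152
node0_pages=$(( node0_kb / default_hugepages_kb ))    # 512
node1_pages=$(( node1_kb / default_hugepages_kb ))    # 1024
echo "HUGENODE=nodes_hp[0]=$node0_pages,nodes_hp[1]=$node1_pages"   # matches the HUGENODE string built below
echo "total=$(( node0_pages + node1_pages ))"                       # 1536, the nr_hugepages the test then verifies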
00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:09:02.798 
17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:09:02.798 17:25:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:06.095 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:09:06.095 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:09:06.095 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local 
mem_f mem 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75332332 kB' 'MemAvailable: 78967852 kB' 'Buffers: 9752 kB' 'Cached: 11655140 kB' 'SwapCached: 0 kB' 'Active: 8604028 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118984 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651736 kB' 'Mapped: 180784 kB' 'Shmem: 7470672 kB' 'KReclaimable: 189784 kB' 'Slab: 625816 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436032 kB' 'KernelStack: 16048 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9343840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
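The AnonHugePages scan running through this part of the trace is gated by the check at setup/hugepages.sh@95 above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]). The string "always [madvise] never" is the usual content of /sys/kernel/mm/transparent_hugepage/enabled; that path is an assumption here, since the trace only shows the string. A hedged sketch of that gate:

thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo '[never]')
if [[ $thp_state != *"[never]"* ]]; then
    # THP is not fully disabled, so the test also reads AnonHugePages (0 kB in this run).
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"
else
    echo "THP set to [never]; AnonHugePages skipped"
fi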
00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.095 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.096 
17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... per-key scan of /proc/meminfo: fields NFS_Unstable through HardwareCorrupted each compared against AnonHugePages and skipped with 'continue' ...]
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:06.096 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:06.097 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75332936 kB' 'MemAvailable: 78968456 kB' 'Buffers: 9752 kB' 'Cached: 11655140 kB' 'SwapCached: 0 kB' 'Active: 8603380 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118336 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651104 kB' 'Mapped: 180748 kB' 'Shmem: 7470672 kB' 'KReclaimable: 189784 kB' 'Slab: 625800 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436016 kB' 'KernelStack: 16048 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9343856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[... per-key scan of /proc/meminfo: fields MemTotal through HugePages_Rsvd each compared against HugePages_Surp and skipped with 'continue' ...]
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
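The lookups traced above (AnonHugePages, HugePages_Surp) and the ones that follow all go through the same get_meminfo helper in setup/common.sh. Below is a minimal sketch of what that helper appears to do, reconstructed from the xtrace alone; only the traced commands (mem_f=/proc/meminfo, the node meminfo check, mapfile, the Node-prefix strip, the IFS=': ' read loop, echo of the value) come from the log, the surrounding control flow and the per-node path handling are assumptions.

# Hedged reconstruction, not the actual SPDK setup/common.sh source.
shopt -s extglob
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# assumed: when a NUMA node id is supplied, read that node's meminfo instead
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# per-node meminfo lines carry a "Node <id> " prefix; strip it
	mem=("${mem[@]#Node +([0-9]) }")
	# scan key/value pairs until the requested field matches, then print its value
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	echo 0
}

Against the meminfo snapshot printed above, get_meminfo HugePages_Surp would print 0 and get_meminfo HugePages_Total would print 1536, which matches the values the trace reports.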
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:06.098 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75332180 kB' 'MemAvailable: 78967700 kB' 'Buffers: 9752 kB' 'Cached: 11655156 kB' 'SwapCached: 0 kB' 'Active: 8603456 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118412 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651108 kB' 'Mapped: 180748 kB' 'Shmem: 7470688 kB' 'KReclaimable: 189784 kB' 'Slab: 625800 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436016 kB' 'KernelStack: 16048 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9343876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199008 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[... per-key scan of /proc/meminfo: fields MemTotal through HugePages_Free each compared against HugePages_Rsvd and skipped with 'continue' ...]
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:09:06.100 nr_hugepages=1536
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:09:06.100 resv_hugepages=0
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:09:06.100 surplus_hugepages=0
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:09:06.100 anon_hugepages=0
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
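The values just collected (anon=0, surp=0, resv=0) feed the consistency checks at setup/hugepages.sh@106 and @108 before the HugePages_Total lookup below. A minimal sketch of that accounting step, assuming the literal 1536 in the trace is the expanded hugepage target of the custom_alloc test; only the helper calls, the echoed names, and the two arithmetic checks are from the log, everything else is an assumption:

# Hedged sketch, not the actual SPDK setup/hugepages.sh source.
nr_hugepages=1536                      # assumed: target expanded to 1536 in the trace
anon=$(get_meminfo AnonHugePages)      # 0 in the trace above
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# both checks reduce to 1536 == 1536 + 0 + 0 and 1536 == 1536 here,
# so the allocated pool matches the requested size with no surplus or reservations
(( 1536 == nr_hugepages + surp + resv ))
(( 1536 == nr_hugepages ))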
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:06.100 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 75331928 kB' 'MemAvailable: 78967448 kB' 'Buffers: 9752 kB' 'Cached: 11655192 kB' 'SwapCached: 0 kB' 'Active: 8603496 kB' 'Inactive: 3709176 kB' 'Active(anon): 8118452 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651076 kB' 'Mapped: 180748 kB' 'Shmem: 7470724 kB' 'KReclaimable: 189784 kB' 'Slab: 625800 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436016 kB' 'KernelStack: 16048 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958500 kB' 'Committed_AS: 9343900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
[... per-key scan of /proc/meminfo: fields MemTotal through VmallocChunk each compared against HugePages_Total and skipped with 'continue'; the scan continues below ...]
00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:09:06.102 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:06.365 17:26:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 43637680 kB' 'MemUsed: 4476324 kB' 'SwapCached: 0 kB' 'Active: 2585120 kB' 
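The scan summarized above is setup/common.sh's get_meminfo walking a meminfo file key by key until it finds the requested field. A minimal stand-alone sketch of the same idea (illustrative only, not the exact SPDK helper; the real script reads the whole file into an array with mapfile first):

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: field name, optional NUMA node id.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # Prefer the per-NUMA-node view when a node id is given and it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <N> "; strip that so
        # the key lands in $var exactly as it would for /proc/meminfo.
        line=${line#Node [0-9] }
        line=${line#Node [0-9][0-9] }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric value only; a trailing "kB" goes to $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Total    # prints 1536 on this test node
get_meminfo HugePages_Surp 0   # surplus 2 MiB pages on NUMA node 0

Called as in the last two lines, the outputs would match the values the trace echoes here (1536 total, 0 surplus on node 0).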
[setup/common.sh@31-32 xtrace: the same IFS/read/continue loop walks each key of the node0 snapshot above in turn, skipping every field until HugePages_Surp matches; setup/common.sh@33 then echoes 0 and returns 0]
[xtrace: setup/hugepages.sh@116 adds the surplus (0) to nodes_test[0]; the @114 loop advances to node 1 and calls get_meminfo HugePages_Surp 1, which re-points mem_f at /sys/devices/system/node/node1/meminfo and re-reads it the same way]
[node1 meminfo as printed by setup/common.sh@16: 'MemTotal: 44171516 kB' 'MemFree: 31697740 kB' 'MemUsed: 12473776 kB' 'SwapCached: 0 kB' 'Active: 6018436 kB' 'Inactive: 3606492 kB' 'Active(anon): 5729772 kB' 'Inactive(anon): 0 kB' 'Active(file): 288664 kB' 'Inactive(file): 3606492 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9261404 kB' 'Mapped: 95532 kB' 'AnonPages: 363724 kB' 'Shmem: 5366248 kB' 'KernelStack: 6568 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102460 kB' 'Slab: 298724 kB' 'SReclaimable: 102460 kB' 'SUnreclaim: 196264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0']
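For context, the surrounding hugepages.sh logic is per-NUMA-node bookkeeping: record what the kernel actually allocated on each node, then compare it with the expected split. A rough equivalent under those assumptions (array names mirror the trace; the awk field index relies on the "Node <N> <key>: <value>" layout of per-node meminfo files):

#!/usr/bin/env bash
declare -A nodes_sys nodes_test

# What the kernel actually allocated on each NUMA node right now.
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    nodes_sys[$id]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
done

# Expected split for this run: 512 pages on node0, 1024 on node1.
nodes_test=([0]=512 [1]=1024)

for id in "${!nodes_test[@]}"; do
    echo "node$id=${nodes_sys[$id]} expecting ${nodes_test[$id]}"
    [[ ${nodes_sys[$id]} -eq ${nodes_test[$id]} ]] || exit 1
done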
[setup/common.sh@31-32 xtrace: the IFS/read/continue loop walks each key of the node1 snapshot above until HugePages_Surp matches; setup/common.sh@33 echoes 0 and returns 0; setup/hugepages.sh@116 adds the surplus (0) to nodes_test[1]]
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:09:06.368 node0=512 expecting 512
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:09:06.368 node1=1024 expecting 1024
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:09:06.368 
00:09:06.368 real 0m3.447s
00:09:06.368 user 0m1.333s
00:09:06.368 sys 0m2.208s
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:06.368 17:26:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:09:06.368 ************************************
00:09:06.368 END TEST custom_alloc
00:09:06.368 ************************************
00:09:06.368 17:26:03 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:09:06.368 17:26:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:06.368 17:26:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:06.368 17:26:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:09:06.368 ************************************
00:09:06.368 START TEST no_shrink_alloc
00:09:06.368 ************************************
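The no_shrink_alloc test that starts here exports NRHUGE=1024 and HUGENODE=0 (visible in the trace below) so that all 2048 kB pages land on a single node. The kernel interface behind that is the per-node nr_hugepages file; the following is a simplified stand-in for what scripts/setup.sh drives, not the actual script logic:

#!/usr/bin/env bash
# Pin NRHUGE 2 MiB hugepages to one NUMA node via the kernel's sysfs knob.
NRHUGE=${NRHUGE:-1024}
HUGENODE=${HUGENODE:-0}

nr=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages
echo "$NRHUGE" | sudo tee "$nr" > /dev/null

# The kernel may allocate fewer pages than requested if memory on that node
# is fragmented, so read the count back and check the system-wide totals too.
cat "$nr"
grep -E 'HugePages_(Total|Free)' /proc/meminfo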
00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:09:06.368 17:26:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:09.666 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:09:09.666 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.5 (8086 2021): Already 
using the vfio-pci driver 00:09:09.666 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:09:09.666 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76358868 kB' 'MemAvailable: 79994388 kB' 'Buffers: 9752 kB' 'Cached: 11655292 kB' 'SwapCached: 0 kB' 'Active: 8608944 kB' 'Inactive: 3709176 kB' 'Active(anon): 8123900 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655876 kB' 'Mapped: 181360 kB' 'Shmem: 7470824 kB' 'KReclaimable: 189784 kB' 'Slab: 625944 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436160 kB' 'KernelStack: 16080 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9348904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199056 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.666 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 
17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
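[Editor's note] The long trace above and below is setup/common.sh's get_meminfo helper scanning every key of /proc/meminfo until it reaches the one requested (here AnonHugePages), hence one "continue" per non-matching field. A minimal standalone sketch of that loop is shown here for orientation; it is not the SPDK script itself, just the same IFS=': ' / read -r pattern the trace exercises:

#!/usr/bin/env bash
# Print the value of one /proc/meminfo field (default: AnonHugePages).
get=${1:-AnonHugePages}
while IFS=': ' read -r var val _; do
    # IFS=': ' strips the trailing colon from the key for us
    if [[ $var == "$get" ]]; then
        echo "$val"    # value in kB for sized fields; a bare count for HugePages_*
        exit 0
    fi
done < /proc/meminfo
exit 1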
00:09:09.667 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76357192 kB' 'MemAvailable: 79992712 kB' 'Buffers: 9752 kB' 'Cached: 11655296 kB' 'SwapCached: 0 kB' 'Active: 8604688 kB' 'Inactive: 3709176 kB' 'Active(anon): 8119644 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 652208 kB' 'Mapped: 180760 kB' 'Shmem: 7470828 kB' 'KReclaimable: 189784 kB' 'Slab: 625936 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436152 kB' 'KernelStack: 16080 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9344400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199040 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.668 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 
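[Editor's note] The helper is now repeating the same scan for HugePages_Surp. When reading such a trace it can be handy to cross-check the numbers directly on the test node; the commands below are generic and only illustrate what the helper ends up reporting:

# Show the hugepage-related fields the test cares about
grep -E 'HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages|Hugepagesize' /proc/meminfo
# Or extract a single value, e.g. the surplus hugepage count
awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo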
17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.669 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76357832 kB' 'MemAvailable: 79993352 kB' 'Buffers: 9752 kB' 'Cached: 11655312 kB' 'SwapCached: 0 kB' 'Active: 8604228 kB' 'Inactive: 3709176 kB' 'Active(anon): 8119184 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651604 kB' 'Mapped: 180760 kB' 'Shmem: 7470844 kB' 'KReclaimable: 189784 kB' 'Slab: 625936 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436152 kB' 'KernelStack: 16048 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9344420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.670 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.671 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
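[Editor's note] Once HugePages_Rsvd has been read, the trace that follows shows hugepages.sh echoing nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then checking the pool arithmetic. A hedged sketch of that accounting (variable names follow the trace; the real script's conditionals may differ slightly):

anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
nr_hugepages=1024   # pool size this no_shrink_alloc test expects

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# the step passes only if the configured pool matches what the kernel reports
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
(( total == nr_hugepages ))               || echo "hugepage pool size changed"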
00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.672 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:09:09.673 nr_hugepages=1024 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:09:09.673 resv_hugepages=0 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:09:09.673 surplus_hugepages=0 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:09:09.673 anon_hugepages=0 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- 
# local mem_f mem 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76358728 kB' 'MemAvailable: 79994248 kB' 'Buffers: 9752 kB' 'Cached: 11655352 kB' 'SwapCached: 0 kB' 'Active: 8604236 kB' 'Inactive: 3709176 kB' 'Active(anon): 8119192 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651532 kB' 'Mapped: 180760 kB' 'Shmem: 7470884 kB' 'KReclaimable: 189784 kB' 'Slab: 625936 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 436152 kB' 'KernelStack: 16032 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9344444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.673 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.674 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:09:09.675 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 42577980 kB' 'MemUsed: 5536024 kB' 'SwapCached: 0 kB' 'Active: 2587076 kB' 'Inactive: 102684 kB' 'Active(anon): 2390696 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2403608 kB' 'Mapped: 85228 kB' 'AnonPages: 289332 kB' 'Shmem: 2104544 kB' 'KernelStack: 9512 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87324 kB' 'Slab: 327132 kB' 'SReclaimable: 87324 kB' 'SUnreclaim: 239808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 
17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.676 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:09:09.677 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:09:09.678 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:09:09.678 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:09:09.678 node0=1024 expecting 1024 00:09:09.678 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:09:09.678 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 
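The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo (and /sys/devices/system/node/node0/meminfo) field by field until it reaches the requested key, after which hugepages.sh compares the value against the expected count ("node0=1024 expecting 1024"). Below is a minimal standalone sketch of that accounting; it is not part of the SPDK scripts, and the helper name get_meminfo_field and the EXPECTED value are illustrative assumptions only.

    #!/usr/bin/env bash
    # Sketch only: read one field from /proc/meminfo or a per-node meminfo
    # file, mirroring what setup/common.sh's get_meminfo does in the log above.
    get_meminfo_field() {
        local field=$1 node=${2:-} file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
            file=/sys/devices/system/node/node${node}/meminfo
        fi
        # Per-node files prefix each line with "Node <n> ", so locate the
        # "<field>:" token wherever it appears and print the value after it.
        awk -v f="$field" 'BEGIN { key = f ":" }
            { for (i = 1; i <= NF; i++) if ($i == key) { print $(i + 1); exit } }' "$file"
    }

    EXPECTED=1024   # illustrative; this job verifies 1024 hugepages on node0
    total=$(get_meminfo_field HugePages_Total)
    rsvd=$(get_meminfo_field HugePages_Rsvd)
    surp=$(get_meminfo_field HugePages_Surp)
    node0=$(get_meminfo_field HugePages_Total 0)
    echo "total=$total rsvd=$rsvd surp=$surp node0=$node0 (expecting $EXPECTED)"
    [[ ${node0:-0} -eq $EXPECTED ]] || echo "node0 hugepage count mismatch" >&2

Run on this test node, such a sketch should report the same counts the log shows (1024 total, 0 reserved, 0 surplus, 1024 on node0) before setup.sh is re-invoked with NRHUGE=512 and HUGENODE=0 in the entries that follow.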
00:09:09.678 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:09:09.678 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:09:09.937 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:09:09.937 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:09:09.937 17:26:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:09:13.235 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:09:13.235 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:09:13.235 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:09:13.235 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:13.235 17:26:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:13.235 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:13.235 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ 
-n '' ]] 00:09:13.235 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:13.235 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:13.235 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.235 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76378420 kB' 'MemAvailable: 80013940 kB' 'Buffers: 9752 kB' 'Cached: 11655432 kB' 'SwapCached: 0 kB' 'Active: 8604876 kB' 'Inactive: 3709176 kB' 'Active(anon): 8119832 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651672 kB' 'Mapped: 180844 kB' 'Shmem: 7470964 kB' 'KReclaimable: 189784 kB' 'Slab: 625740 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 435956 kB' 'KernelStack: 16032 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9344908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199056 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:09:13.236 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read-compare-continue trace repeats identically for each remaining /proc/meminfo field ...]
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
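The cycle just traced (setup/common.sh@17-33) amounts to a field lookup over /proc/meminfo: read the file, split each line on ': ', skip entries until the requested key matches, then echo its value. A minimal bash sketch of that lookup follows, under a hypothetical helper name; the real get_meminfo in setup/common.sh additionally handles per-NUMA-node meminfo files and strips their 'Node <n> ' prefix, as the node/meminfo test and the mem=() rewrite in the trace show.

#!/usr/bin/env bash
# Hypothetical stand-in for setup/common.sh:get_meminfo (simplified sketch).
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. AnonHugePages, HugePages_Surp
        echo "$val"                        # kB value, or a bare page count
        return 0
    done < /proc/meminfo
    return 1                               # field not present
}
anon=$(get_meminfo_value AnonHugePages)    # -> 0 in the run above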
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76379016 kB' 'MemAvailable: 80014536 kB' 'Buffers: 9752 kB' 'Cached: 11655436 kB' 'SwapCached: 0 kB' 'Active: 8606352 kB' 'Inactive: 3709176 kB' 'Active(anon): 8121308 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 653892 kB' 'Mapped: 180768 kB' 'Shmem: 7470968 kB' 'KReclaimable: 189784 kB' 'Slab: 625716 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 435932 kB' 'KernelStack: 16048 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9344928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:13.237 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31/@32 read-compare-continue trace repeats identically for each remaining /proc/meminfo field ...]
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285520 kB' 'MemFree: 76379224 kB' 'MemAvailable: 80014744 kB' 'Buffers: 9752 kB' 'Cached: 11655452 kB' 'SwapCached: 0 kB' 'Active: 8606404 kB' 'Inactive: 3709176 kB' 'Active(anon): 8121360 kB' 'Inactive(anon): 0 kB' 'Active(file): 485044 kB' 'Inactive(file): 3709176 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 653876 kB' 'Mapped: 180768 kB' 'Shmem: 7470984 kB' 'KReclaimable: 189784 kB' 'Slab: 625716 kB' 'SReclaimable: 189784 kB' 'SUnreclaim: 435932 kB' 'KernelStack: 16048 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482788 kB' 'Committed_AS: 9344948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 46080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 468480 kB' 'DirectMap2M: 6547456 kB' 'DirectMap1G: 95420416 kB'
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:09:13.239 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31/@32 read-compare-continue trace repeats identically for each remaining /proc/meminfo field ...]
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:09:13.241 nr_hugepages=1024
17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:09:13.241 resv_hugepages=0
17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:09:13.241 surplus_hugepages=0
17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:09:13.241 anon_hugepages=0
17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
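The two arithmetic checks above (setup/hugepages.sh@106 and @108) are the pass condition for this no_shrink_alloc step: every requested huge page must still be accounted for, with no surplus, reserved, or anonymous huge pages in the mix. A compact restatement of that bookkeeping, with illustrative variable names and the values visible in the trace:

# Illustrative restatement of the checks at setup/hugepages.sh@106/@108
# (names are hypothetical; 1024 and the zeroes are the values read back above).
expected=1024
nr_hugepages=1024; anon=0; surp=0; resv=0
(( expected == nr_hugepages + surp + resv )) || exit 1   # nothing leaked into surplus/reserved
(( expected == nr_hugepages ))               || exit 1   # pool still holds every requested page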
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.241 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:09:13.242 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48114004 kB' 'MemFree: 42578076 kB' 'MemUsed: 5535928 kB' 'SwapCached: 0 kB' 'Active: 2589820 kB' 'Inactive: 102684 kB' 'Active(anon): 2393440 kB' 'Inactive(anon): 0 kB' 'Active(file): 196380 kB' 'Inactive(file): 102684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2403644 kB' 'Mapped: 85236 kB' 'AnonPages: 292180 kB' 'Shmem: 2104580 kB' 'KernelStack: 9480 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87324 kB' 'Slab: 326940 kB' 'SReclaimable: 87324 kB' 'SUnreclaim: 239616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
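Above, the same lookup is repeated with node=0: when /sys/devices/system/node/node0/meminfo exists it replaces /proc/meminfo, and each line carries a "Node 0 " prefix that has to be dropped before the key/value split. A hedged sketch of that per-node variant (illustrative names, not the SPDK helper itself):

    # Return one field from a single NUMA node's meminfo, e.g. (0, HugePages_Surp) -> 0.
    get_node_meminfo_field() {
        local node=$1 get=$2 var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        [[ -e $mem_f ]] || return 1
        # Each line reads "Node <n> Key: value ...", so discard the first two words.
        while IFS=': ' read -r _ _ var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

The per-node counts gathered this way are what the test compares against the pool it configured, which is where the 'node0=1024 expecting 1024' line further down comes from.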
00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.243 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:09:13.244 node0=1024 expecting 1024 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:09:13.244 00:09:13.244 real 0m6.833s 00:09:13.244 user 0m2.630s 00:09:13.244 sys 0m4.385s 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.244 17:26:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:09:13.244 ************************************ 00:09:13.244 END TEST no_shrink_alloc 00:09:13.244 ************************************ 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:09:13.244 17:26:10 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:09:13.244 00:09:13.244 real 0m24.514s 00:09:13.244 user 0m8.286s 00:09:13.244 sys 0m13.711s 00:09:13.244 17:26:10 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.244 17:26:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:09:13.244 ************************************ 00:09:13.244 END TEST hugepages 
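Once the allocation checks pass, clear_hp walks every per-node hugepage pool and writes 0 back, so the suite leaves the machine as it found it. A sketch of that cleanup loop (assumes root and the usual sysfs layout; written for illustration rather than copied from hugepages.sh):

    # Reset every NUMA node's hugepage pools to zero after the test run.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # signal to later setup.sh invocations that the pools were cleared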
00:09:13.244 ************************************ 00:09:13.244 17:26:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:09:13.244 17:26:10 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.244 17:26:10 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.244 17:26:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:13.244 ************************************ 00:09:13.244 START TEST driver 00:09:13.244 ************************************ 00:09:13.244 17:26:10 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:09:13.504 * Looking for test storage... 00:09:13.504 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.504 17:26:10 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.504 --rc genhtml_branch_coverage=1 00:09:13.504 --rc genhtml_function_coverage=1 00:09:13.504 --rc genhtml_legend=1 00:09:13.504 --rc geninfo_all_blocks=1 00:09:13.504 --rc geninfo_unexecuted_blocks=1 00:09:13.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:13.504 ' 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.504 --rc genhtml_branch_coverage=1 00:09:13.504 --rc genhtml_function_coverage=1 00:09:13.504 --rc genhtml_legend=1 00:09:13.504 --rc geninfo_all_blocks=1 00:09:13.504 --rc geninfo_unexecuted_blocks=1 00:09:13.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:13.504 ' 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:13.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.504 --rc genhtml_branch_coverage=1 00:09:13.504 --rc genhtml_function_coverage=1 00:09:13.504 --rc genhtml_legend=1 00:09:13.504 --rc geninfo_all_blocks=1 00:09:13.504 --rc geninfo_unexecuted_blocks=1 00:09:13.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:13.504 ' 00:09:13.504 17:26:10 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.504 --rc genhtml_branch_coverage=1 00:09:13.504 --rc genhtml_function_coverage=1 00:09:13.504 --rc genhtml_legend=1 00:09:13.504 --rc geninfo_all_blocks=1 00:09:13.504 --rc geninfo_unexecuted_blocks=1 00:09:13.504 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:13.504 ' 00:09:13.504 17:26:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:09:13.504 17:26:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:13.504 17:26:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:18.780 17:26:15 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:09:18.780 17:26:15 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.780 17:26:15 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.780 17:26:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:09:18.780 ************************************ 00:09:18.780 START TEST guess_driver 00:09:18.780 ************************************ 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 160 > 0 )) 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:09:18.780 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:09:18.780 Looking for driver=vfio-pci 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:09:18.780 17:26:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.318 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.577 17:26:18 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.577 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.578 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.578 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.578 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:21.578 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:21.578 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:21.578 17:26:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:24.871 17:26:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:30.148 00:09:30.148 real 0m11.196s 00:09:30.148 user 0m2.562s 00:09:30.148 sys 0m4.828s 00:09:30.148 17:26:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.148 17:26:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:09:30.148 ************************************ 00:09:30.148 END TEST guess_driver 00:09:30.148 ************************************ 00:09:30.148 00:09:30.148 real 0m16.096s 00:09:30.148 user 0m4.043s 00:09:30.148 sys 0m7.492s 00:09:30.148 17:26:26 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.148 17:26:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:09:30.148 ************************************ 00:09:30.148 END TEST driver 00:09:30.148 ************************************ 00:09:30.148 17:26:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:09:30.148 17:26:26 setup.sh -- 
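The guess_driver trace above shows how the test decides which userspace driver setup.sh should bind: it checks whether the IOMMU is usable (unsafe no-IOMMU mode available, or /sys/kernel/iommu_groups populated, 160 groups here) and whether vfio_pci and its dependencies resolve via modprobe --show-depends, otherwise reporting the 'No valid driver found' marker. A simplified sketch of that decision, condensed for illustration rather than the verbatim driver.sh implementation:

    # Pick vfio-pci when the IOMMU is usable and the module chain resolves.
    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob
        if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

The test then reads setup.sh's own 'Looking for driver=...' marker lines and fails if they disagree with this guess, which is the repeated [[ vfio-pci == vfio-pci ]] comparison seen above before the (( fail == 0 )) check.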
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:30.148 17:26:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.148 17:26:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:30.148 ************************************ 00:09:30.148 START TEST devices 00:09:30.148 ************************************ 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:09:30.148 * Looking for test storage... 00:09:30.148 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.148 17:26:26 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.148 --rc genhtml_branch_coverage=1 00:09:30.148 --rc genhtml_function_coverage=1 00:09:30.148 --rc genhtml_legend=1 00:09:30.148 --rc geninfo_all_blocks=1 00:09:30.148 --rc geninfo_unexecuted_blocks=1 00:09:30.148 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.148 ' 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.148 --rc genhtml_branch_coverage=1 00:09:30.148 --rc genhtml_function_coverage=1 00:09:30.148 --rc genhtml_legend=1 00:09:30.148 --rc geninfo_all_blocks=1 00:09:30.148 --rc geninfo_unexecuted_blocks=1 00:09:30.148 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.148 ' 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.148 --rc genhtml_branch_coverage=1 00:09:30.148 --rc genhtml_function_coverage=1 00:09:30.148 --rc genhtml_legend=1 00:09:30.148 --rc geninfo_all_blocks=1 00:09:30.148 --rc geninfo_unexecuted_blocks=1 00:09:30.148 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.148 ' 00:09:30.148 17:26:26 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.148 --rc genhtml_branch_coverage=1 00:09:30.148 --rc genhtml_function_coverage=1 00:09:30.148 --rc genhtml_legend=1 00:09:30.148 --rc geninfo_all_blocks=1 00:09:30.148 --rc geninfo_unexecuted_blocks=1 00:09:30.148 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.148 ' 00:09:30.148 17:26:26 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:09:30.148 17:26:26 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:09:30.148 17:26:26 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:30.148 17:26:26 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:09:33.440 17:26:30 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:09:33.440 No valid GPT data, bailing 00:09:33.440 17:26:30 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:09:33.440 17:26:30 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:33.440 17:26:30 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:09:33.440 17:26:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.440 17:26:30 
setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.440 17:26:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:33.440 ************************************ 00:09:33.440 START TEST nvme_mount 00:09:33.440 ************************************ 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:33.440 17:26:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:09:34.379 Creating new GPT entries in memory. 00:09:34.379 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:34.379 other utilities. 00:09:34.379 17:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:09:34.379 17:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:34.379 17:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:34.379 17:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:34.379 17:26:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:09:35.759 Creating new GPT entries in memory. 00:09:35.759 The operation has completed successfully. 
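
Pulling the nvme_mount trace together: the test zaps the disk, creates one ~1 GiB GPT partition (just completed above), formats it ext4, mounts it under the workspace's test/setup/nvme_mount directory, drops a dummy file for the later verify step, and cleans up with umount and wipefs. The following is only a sketch of that flow, assuming /dev/nvme0n1 as the test disk (as picked earlier in this run) and using illustrative variable names rather than the literal test/setup/devices.sh code:

#!/usr/bin/env bash
# Sketch of the nvme_mount flow traced in this log; not the literal devices.sh code.
set -euo pipefail

disk=/dev/nvme0n1                         # test disk selected earlier in the run
part=${disk}p1
mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                  # destroy any existing partition table
sgdisk "$disk" --new=1:2048:2099199       # one 1 GiB partition, same bounds as the trace
mkfs.ext4 -qF "$part"                     # quiet, forced ext4, as in "mkfs.ext4 -qF" above
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                    # dummy file the verify step checks for

# cleanup_nvme equivalent: unmount, then wipe filesystem/GPT signatures
umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"
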
00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2078028 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:35.759 17:26:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 
17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.053 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:39.054 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:39.054 17:26:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:39.054 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:09:39.054 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:09:39.054 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:39.054 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:39.054 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:09:39.054 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:09:39.054 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:39.054 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:09:39.054 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:09:39.054 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:39.314 17:26:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:42.604 17:26:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:45.897 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:45.897 00:09:45.897 real 0m12.280s 00:09:45.897 user 0m3.589s 00:09:45.897 sys 0m6.577s 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.897 17:26:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:09:45.897 ************************************ 00:09:45.897 END TEST nvme_mount 00:09:45.897 ************************************ 00:09:45.897 17:26:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:09:45.897 17:26:42 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.897 17:26:42 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.898 17:26:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:45.898 ************************************ 00:09:45.898 START TEST dm_mount 00:09:45.898 ************************************ 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:45.898 
17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:45.898 17:26:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:09:46.836 Creating new GPT entries in memory. 00:09:46.836 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:46.836 other utilities. 00:09:46.836 17:26:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:09:46.836 17:26:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:46.836 17:26:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:46.836 17:26:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:46.836 17:26:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:09:47.774 Creating new GPT entries in memory. 00:09:47.774 The operation has completed successfully. 00:09:47.774 17:26:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:47.774 17:26:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:47.774 17:26:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:47.774 17:26:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:47.774 17:26:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:09:49.154 The operation has completed successfully. 
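
The dm_mount steps that follow build on the two partitions just created: they stack a device-mapper target across nvme0n1p1 and nvme0n1p2, format and mount the mapped device, and later verify that /sys/class/block/nvme0n1p1/holders and .../nvme0n1p2/holders both point at the new dm device. A rough sketch is below; the dmsetup table is an assumption (the trace only shows "dmsetup create nvme_dm_test", not the table it was fed as input), while the device names and mount path mirror this run:

#!/usr/bin/env bash
# Sketch of the dm_mount flow traced in this log; the linear dmsetup table is assumed.
set -euo pipefail

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
mnt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount

# Concatenate the two partitions into one linear device-mapper device (dm-0 in this run).
p1_sz=$(blockdev --getsz "$p1")           # sizes in 512-byte sectors
p2_sz=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 ${p1_sz} linear ${p1} 0
${p1_sz} ${p2_sz} linear ${p2} 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p "$mnt"
mount /dev/mapper/nvme_dm_test "$mnt"
touch "$mnt/test_dm"                      # dummy file the verify step checks for

# cleanup_dm equivalent, as traced later in the log
umount "$mnt"
dmsetup remove --force nvme_dm_test
wipefs --all "$p1" "$p2"
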
00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2081761 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:09:49.154 17:26:45 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:49.154 17:26:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:52.534 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.534 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:09:52.534 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:09:52.534 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:09:52.535 17:26:49 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:09:52.535 17:26:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:09:55.828 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:09:55.828 00:09:55.828 real 0m9.736s 00:09:55.828 user 0m2.398s 00:09:55.828 sys 0m4.390s 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.828 17:26:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:09:55.828 ************************************ 00:09:55.828 END TEST dm_mount 00:09:55.828 ************************************ 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:55.828 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:09:55.828 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:09:55.828 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:55.828 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:55.828 17:26:52 
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:09:55.828 17:26:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:09:55.828 00:09:55.828 real 0m26.365s 00:09:55.828 user 0m7.472s 00:09:55.828 sys 0m13.761s 00:09:55.828 17:26:52 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.828 17:26:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:09:55.828 ************************************ 00:09:55.828 END TEST devices 00:09:55.828 ************************************ 00:09:55.828 00:09:55.828 real 1m33.142s 00:09:55.828 user 0m27.898s 00:09:55.828 sys 0m49.563s 00:09:55.828 17:26:52 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.828 17:26:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:55.828 ************************************ 00:09:55.828 END TEST setup.sh 00:09:55.828 ************************************ 00:09:56.088 17:26:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:09:59.382 Hugepages 00:09:59.382 node hugesize free / total 00:09:59.382 node0 1048576kB 0 / 0 00:09:59.382 node0 2048kB 1024 / 1024 00:09:59.382 node1 1048576kB 0 / 0 00:09:59.382 node1 2048kB 1024 / 1024 00:09:59.382 00:09:59.382 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:59.382 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:09:59.382 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:09:59.382 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:09:59.382 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:09:59.382 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:09:59.382 17:26:56 -- spdk/autotest.sh@117 -- # uname -s 00:09:59.382 17:26:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:59.382 17:26:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:59.382 17:26:56 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:10:02.676 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:02.676 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:05.968 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:10:05.968 17:27:02 -- common/autotest_common.sh@1515 -- # sleep 1 00:10:06.907 17:27:03 -- common/autotest_common.sh@1516 -- # bdfs=() 00:10:06.907 17:27:03 -- common/autotest_common.sh@1516 -- # local bdfs 00:10:06.907 17:27:03 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.907 17:27:03 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:10:06.907 17:27:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:06.907 17:27:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:06.907 17:27:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.907 17:27:03 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:06.907 17:27:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:06.907 17:27:03 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:10:06.907 17:27:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:10:06.907 17:27:03 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:10:10.198 Waiting for block devices as requested 00:10:10.198 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:10:10.198 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:10.198 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:10.198 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:10.458 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:10.458 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:10.458 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:10.718 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:10.718 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:10.718 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:10.978 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:10.978 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:10.978 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:11.237 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:11.237 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:11.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:11.497 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:11.497 17:27:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:10:11.497 17:27:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:10:11.497 17:27:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:10:11.497 17:27:08 -- common/autotest_common.sh@1490 -- # basename 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:10:11.497 17:27:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:10:11.497 17:27:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:10:11.497 17:27:08 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:10:11.497 17:27:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:10:11.497 17:27:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:10:11.497 17:27:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:10:11.497 17:27:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:10:11.497 17:27:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:10:11.497 17:27:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:10:11.497 17:27:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:10:11.497 17:27:08 -- common/autotest_common.sh@1541 -- # continue 00:10:11.497 17:27:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:11.497 17:27:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.497 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 17:27:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:11.756 17:27:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.756 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 17:27:08 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:10:15.047 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:15.047 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:18.340 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:10:18.340 17:27:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:18.340 17:27:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:18.340 17:27:15 -- common/autotest_common.sh@10 -- # set +x 00:10:18.340 17:27:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:18.340 17:27:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:10:18.340 17:27:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:10:18.340 17:27:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:10:18.340 17:27:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:10:18.340 17:27:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:10:18.340 17:27:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:10:18.340 17:27:15 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:10:18.340 17:27:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:18.340 17:27:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:10:18.340 17:27:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:18.340 17:27:15 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:18.340 17:27:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:18.340 17:27:15 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:10:18.340 17:27:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:10:18.340 17:27:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:10:18.340 17:27:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:10:18.340 17:27:15 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:10:18.340 17:27:15 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:10:18.340 17:27:15 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:10:18.340 17:27:15 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:10:18.340 17:27:15 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:10:18.340 17:27:15 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:10:18.340 17:27:15 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2090418 00:10:18.340 17:27:15 -- common/autotest_common.sh@1583 -- # waitforlisten 2090418 00:10:18.340 17:27:15 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:18.340 17:27:15 -- common/autotest_common.sh@831 -- # '[' -z 2090418 ']' 00:10:18.340 17:27:15 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.340 17:27:15 -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.340 17:27:15 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.340 17:27:15 -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.340 17:27:15 -- common/autotest_common.sh@10 -- # set +x 00:10:18.341 [2024-10-14 17:27:15.349918] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
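The get_nvme_bdfs helper traced just above builds the list of NVMe PCI addresses by asking scripts/gen_nvme.sh for an attach config and pulling every traddr out of it with jq; get_nvme_bdfs_by_id then keeps only controllers whose PCI device ID matches 0x0a54. A minimal sketch of that flow, using the same jq filter and sysfs path shown in the trace (illustrative only, not the full autotest_common.sh implementation):

  # $rootdir points at the SPDK checkout, as in the trace above
  bdfs=()
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      # keep only controllers whose PCI device ID matches the one under test (0x0a54)
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"    # prints 0000:5e:00.0 on this node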
00:10:18.341 [2024-10-14 17:27:15.349990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090418 ] 00:10:18.601 [2024-10-14 17:27:15.432720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.601 [2024-10-14 17:27:15.482498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.861 17:27:15 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.861 17:27:15 -- common/autotest_common.sh@864 -- # return 0 00:10:18.861 17:27:15 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:10:18.861 17:27:15 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:10:18.861 17:27:15 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:10:22.160 nvme0n1 00:10:22.160 17:27:18 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:10:22.160 [2024-10-14 17:27:18.902111] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:10:22.160 request: 00:10:22.160 { 00:10:22.160 "nvme_ctrlr_name": "nvme0", 00:10:22.160 "password": "test", 00:10:22.160 "method": "bdev_nvme_opal_revert", 00:10:22.160 "req_id": 1 00:10:22.160 } 00:10:22.160 Got JSON-RPC error response 00:10:22.160 response: 00:10:22.160 { 00:10:22.160 "code": -32602, 00:10:22.160 "message": "Invalid parameters" 00:10:22.160 } 00:10:22.160 17:27:18 -- common/autotest_common.sh@1589 -- # true 00:10:22.160 17:27:18 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:10:22.160 17:27:18 -- common/autotest_common.sh@1593 -- # killprocess 2090418 00:10:22.160 17:27:18 -- common/autotest_common.sh@950 -- # '[' -z 2090418 ']' 00:10:22.160 17:27:18 -- common/autotest_common.sh@954 -- # kill -0 2090418 00:10:22.160 17:27:18 -- common/autotest_common.sh@955 -- # uname 00:10:22.160 17:27:18 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.160 17:27:18 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090418 00:10:22.160 17:27:18 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.160 17:27:18 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.160 17:27:18 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090418' 00:10:22.160 killing process with pid 2090418 00:10:22.160 17:27:18 -- common/autotest_common.sh@969 -- # kill 2090418 00:10:22.160 17:27:18 -- common/autotest_common.sh@974 -- # wait 2090418 00:10:26.352 17:27:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:26.352 17:27:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:26.352 17:27:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:26.352 17:27:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:26.352 17:27:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:26.352 17:27:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:26.352 17:27:22 -- common/autotest_common.sh@10 -- # set +x 00:10:26.352 17:27:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:26.352 17:27:22 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:10:26.352 17:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.352 17:27:22 -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:10:26.352 17:27:22 -- common/autotest_common.sh@10 -- # set +x 00:10:26.352 ************************************ 00:10:26.352 START TEST env 00:10:26.352 ************************************ 00:10:26.352 17:27:22 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:10:26.352 * Looking for test storage... 00:10:26.352 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:10:26.352 17:27:23 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:26.352 17:27:23 env -- common/autotest_common.sh@1691 -- # lcov --version 00:10:26.352 17:27:23 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:26.352 17:27:23 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:26.352 17:27:23 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.352 17:27:23 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.352 17:27:23 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.352 17:27:23 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.352 17:27:23 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.352 17:27:23 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.352 17:27:23 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.352 17:27:23 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.352 17:27:23 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.352 17:27:23 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.353 17:27:23 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.353 17:27:23 env -- scripts/common.sh@344 -- # case "$op" in 00:10:26.353 17:27:23 env -- scripts/common.sh@345 -- # : 1 00:10:26.353 17:27:23 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.353 17:27:23 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.353 17:27:23 env -- scripts/common.sh@365 -- # decimal 1 00:10:26.353 17:27:23 env -- scripts/common.sh@353 -- # local d=1 00:10:26.353 17:27:23 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.353 17:27:23 env -- scripts/common.sh@355 -- # echo 1 00:10:26.353 17:27:23 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.353 17:27:23 env -- scripts/common.sh@366 -- # decimal 2 00:10:26.353 17:27:23 env -- scripts/common.sh@353 -- # local d=2 00:10:26.353 17:27:23 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.353 17:27:23 env -- scripts/common.sh@355 -- # echo 2 00:10:26.353 17:27:23 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.353 17:27:23 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.353 17:27:23 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.353 17:27:23 env -- scripts/common.sh@368 -- # return 0 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.353 ' 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.353 ' 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.353 ' 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.353 --rc genhtml_branch_coverage=1 00:10:26.353 --rc genhtml_function_coverage=1 00:10:26.353 --rc genhtml_legend=1 00:10:26.353 --rc geninfo_all_blocks=1 00:10:26.353 --rc geninfo_unexecuted_blocks=1 00:10:26.353 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.353 ' 00:10:26.353 17:27:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.353 17:27:23 env -- common/autotest_common.sh@10 -- # set +x 00:10:26.353 ************************************ 00:10:26.353 START TEST env_memory 00:10:26.353 ************************************ 00:10:26.353 17:27:23 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:10:26.353 00:10:26.353 00:10:26.353 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.353 http://cunit.sourceforge.net/ 00:10:26.353 00:10:26.353 00:10:26.353 Suite: memory 00:10:26.353 Test: alloc and free memory map ...[2024-10-14 17:27:23.243109] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:26.353 passed 00:10:26.353 Test: mem map translation ...[2024-10-14 17:27:23.256570] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:26.353 [2024-10-14 17:27:23.256588] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:26.353 [2024-10-14 17:27:23.256622] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:26.353 [2024-10-14 17:27:23.256631] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:26.353 passed 00:10:26.353 Test: mem map registration ...[2024-10-14 17:27:23.278658] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:26.353 [2024-10-14 17:27:23.278675] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:26.353 passed 00:10:26.353 Test: mem map adjacent registrations ...passed 00:10:26.353 00:10:26.353 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.353 suites 1 1 n/a 0 0 00:10:26.353 tests 4 4 4 0 0 00:10:26.353 asserts 152 152 152 0 n/a 00:10:26.353 00:10:26.353 Elapsed time = 0.088 seconds 00:10:26.353 00:10:26.353 real 0m0.101s 00:10:26.353 user 0m0.085s 00:10:26.353 sys 0m0.016s 00:10:26.353 17:27:23 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.353 17:27:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:26.353 ************************************ 00:10:26.353 END TEST env_memory 00:10:26.353 ************************************ 00:10:26.353 17:27:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.353 17:27:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.353 17:27:23 env -- common/autotest_common.sh@10 -- # set +x 00:10:26.353 ************************************ 00:10:26.353 START TEST env_vtophys 00:10:26.353 ************************************ 00:10:26.353 17:27:23 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:26.353 EAL: lib.eal log level changed from notice to debug 00:10:26.353 EAL: Detected lcore 0 as core 0 on socket 0 00:10:26.353 EAL: Detected lcore 1 as core 1 on socket 0 00:10:26.353 EAL: Detected lcore 2 as core 2 on socket 0 00:10:26.353 EAL: Detected lcore 3 as 
core 3 on socket 0 00:10:26.353 EAL: Detected lcore 4 as core 4 on socket 0 00:10:26.353 EAL: Detected lcore 5 as core 8 on socket 0 00:10:26.353 EAL: Detected lcore 6 as core 9 on socket 0 00:10:26.353 EAL: Detected lcore 7 as core 10 on socket 0 00:10:26.353 EAL: Detected lcore 8 as core 11 on socket 0 00:10:26.353 EAL: Detected lcore 9 as core 16 on socket 0 00:10:26.353 EAL: Detected lcore 10 as core 17 on socket 0 00:10:26.353 EAL: Detected lcore 11 as core 18 on socket 0 00:10:26.353 EAL: Detected lcore 12 as core 19 on socket 0 00:10:26.353 EAL: Detected lcore 13 as core 20 on socket 0 00:10:26.353 EAL: Detected lcore 14 as core 24 on socket 0 00:10:26.353 EAL: Detected lcore 15 as core 25 on socket 0 00:10:26.353 EAL: Detected lcore 16 as core 26 on socket 0 00:10:26.353 EAL: Detected lcore 17 as core 27 on socket 0 00:10:26.353 EAL: Detected lcore 18 as core 0 on socket 1 00:10:26.353 EAL: Detected lcore 19 as core 1 on socket 1 00:10:26.353 EAL: Detected lcore 20 as core 2 on socket 1 00:10:26.353 EAL: Detected lcore 21 as core 3 on socket 1 00:10:26.353 EAL: Detected lcore 22 as core 4 on socket 1 00:10:26.353 EAL: Detected lcore 23 as core 8 on socket 1 00:10:26.353 EAL: Detected lcore 24 as core 9 on socket 1 00:10:26.353 EAL: Detected lcore 25 as core 10 on socket 1 00:10:26.353 EAL: Detected lcore 26 as core 11 on socket 1 00:10:26.353 EAL: Detected lcore 27 as core 16 on socket 1 00:10:26.353 EAL: Detected lcore 28 as core 17 on socket 1 00:10:26.353 EAL: Detected lcore 29 as core 18 on socket 1 00:10:26.353 EAL: Detected lcore 30 as core 19 on socket 1 00:10:26.353 EAL: Detected lcore 31 as core 20 on socket 1 00:10:26.353 EAL: Detected lcore 32 as core 24 on socket 1 00:10:26.353 EAL: Detected lcore 33 as core 25 on socket 1 00:10:26.353 EAL: Detected lcore 34 as core 26 on socket 1 00:10:26.353 EAL: Detected lcore 35 as core 27 on socket 1 00:10:26.353 EAL: Detected lcore 36 as core 0 on socket 0 00:10:26.353 EAL: Detected lcore 37 as core 1 on socket 0 00:10:26.353 EAL: Detected lcore 38 as core 2 on socket 0 00:10:26.353 EAL: Detected lcore 39 as core 3 on socket 0 00:10:26.353 EAL: Detected lcore 40 as core 4 on socket 0 00:10:26.353 EAL: Detected lcore 41 as core 8 on socket 0 00:10:26.353 EAL: Detected lcore 42 as core 9 on socket 0 00:10:26.353 EAL: Detected lcore 43 as core 10 on socket 0 00:10:26.353 EAL: Detected lcore 44 as core 11 on socket 0 00:10:26.353 EAL: Detected lcore 45 as core 16 on socket 0 00:10:26.353 EAL: Detected lcore 46 as core 17 on socket 0 00:10:26.353 EAL: Detected lcore 47 as core 18 on socket 0 00:10:26.353 EAL: Detected lcore 48 as core 19 on socket 0 00:10:26.353 EAL: Detected lcore 49 as core 20 on socket 0 00:10:26.353 EAL: Detected lcore 50 as core 24 on socket 0 00:10:26.353 EAL: Detected lcore 51 as core 25 on socket 0 00:10:26.353 EAL: Detected lcore 52 as core 26 on socket 0 00:10:26.353 EAL: Detected lcore 53 as core 27 on socket 0 00:10:26.353 EAL: Detected lcore 54 as core 0 on socket 1 00:10:26.353 EAL: Detected lcore 55 as core 1 on socket 1 00:10:26.353 EAL: Detected lcore 56 as core 2 on socket 1 00:10:26.353 EAL: Detected lcore 57 as core 3 on socket 1 00:10:26.353 EAL: Detected lcore 58 as core 4 on socket 1 00:10:26.353 EAL: Detected lcore 59 as core 8 on socket 1 00:10:26.353 EAL: Detected lcore 60 as core 9 on socket 1 00:10:26.353 EAL: Detected lcore 61 as core 10 on socket 1 00:10:26.353 EAL: Detected lcore 62 as core 11 on socket 1 00:10:26.353 EAL: Detected lcore 63 as core 16 on socket 1 00:10:26.353 EAL: 
Detected lcore 64 as core 17 on socket 1 00:10:26.353 EAL: Detected lcore 65 as core 18 on socket 1 00:10:26.353 EAL: Detected lcore 66 as core 19 on socket 1 00:10:26.353 EAL: Detected lcore 67 as core 20 on socket 1 00:10:26.353 EAL: Detected lcore 68 as core 24 on socket 1 00:10:26.353 EAL: Detected lcore 69 as core 25 on socket 1 00:10:26.353 EAL: Detected lcore 70 as core 26 on socket 1 00:10:26.353 EAL: Detected lcore 71 as core 27 on socket 1 00:10:26.353 EAL: Maximum logical cores by configuration: 128 00:10:26.354 EAL: Detected CPU lcores: 72 00:10:26.354 EAL: Detected NUMA nodes: 2 00:10:26.354 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:26.354 EAL: Checking presence of .so 'librte_eal.so.24' 00:10:26.354 EAL: Checking presence of .so 'librte_eal.so' 00:10:26.354 EAL: Detected static linkage of DPDK 00:10:26.354 EAL: No shared files mode enabled, IPC will be disabled 00:10:26.613 EAL: Bus pci wants IOVA as 'DC' 00:10:26.613 EAL: Buses did not request a specific IOVA mode. 00:10:26.613 EAL: IOMMU is available, selecting IOVA as VA mode. 00:10:26.613 EAL: Selected IOVA mode 'VA' 00:10:26.613 EAL: Probing VFIO support... 00:10:26.613 EAL: IOMMU type 1 (Type 1) is supported 00:10:26.613 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:26.613 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:26.613 EAL: VFIO support initialized 00:10:26.613 EAL: Ask a virtual area of 0x2e000 bytes 00:10:26.613 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:26.613 EAL: Setting up physically contiguous memory... 00:10:26.613 EAL: Setting maximum number of open files to 524288 00:10:26.613 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:26.613 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:10:26.613 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:26.613 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.613 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:26.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:26.613 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.613 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:26.613 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:26.613 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.613 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:26.613 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:26.613 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.613 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:26.614 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.614 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:26.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:26.614 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.614 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:26.614 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.614 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:26.614 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:26.614 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.614 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:26.614 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:10:26.614 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.614 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:10:26.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:26.614 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.614 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:10:26.614 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.614 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:10:26.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:26.614 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.614 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:10:26.614 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.614 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:10:26.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:26.614 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.614 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:10:26.614 EAL: Ask a virtual area of 0x61000 bytes 00:10:26.614 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:10:26.614 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:26.614 EAL: Ask a virtual area of 0x400000000 bytes 00:10:26.614 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:10:26.614 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:10:26.614 EAL: Hugepages will be freed exactly as allocated. 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: TSC frequency is ~2300000 KHz 00:10:26.614 EAL: Main lcore 0 is ready (tid=7f16ce9f6a00;cpuset=[0]) 00:10:26.614 EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 0 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 2MB 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Mem event callback 'spdk:(nil)' registered 00:10:26.614 00:10:26.614 00:10:26.614 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.614 http://cunit.sourceforge.net/ 00:10:26.614 00:10:26.614 00:10:26.614 Suite: components_suite 00:10:26.614 Test: vtophys_malloc_test ...passed 00:10:26.614 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 4MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 4MB 00:10:26.614 EAL: Trying to obtain current memory policy. 
00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 6MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 6MB 00:10:26.614 EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 10MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 10MB 00:10:26.614 EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 18MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 18MB 00:10:26.614 EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 34MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 34MB 00:10:26.614 EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 66MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 66MB 00:10:26.614 EAL: Trying to obtain current memory policy. 
00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 130MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was shrunk by 130MB 00:10:26.614 EAL: Trying to obtain current memory policy. 00:10:26.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.614 EAL: Restoring previous memory policy: 4 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.614 EAL: request: mp_malloc_sync 00:10:26.614 EAL: No shared files mode enabled, IPC is disabled 00:10:26.614 EAL: Heap on socket 0 was expanded by 258MB 00:10:26.614 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.874 EAL: request: mp_malloc_sync 00:10:26.874 EAL: No shared files mode enabled, IPC is disabled 00:10:26.874 EAL: Heap on socket 0 was shrunk by 258MB 00:10:26.874 EAL: Trying to obtain current memory policy. 00:10:26.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:26.874 EAL: Restoring previous memory policy: 4 00:10:26.874 EAL: Calling mem event callback 'spdk:(nil)' 00:10:26.874 EAL: request: mp_malloc_sync 00:10:26.874 EAL: No shared files mode enabled, IPC is disabled 00:10:26.874 EAL: Heap on socket 0 was expanded by 514MB 00:10:26.874 EAL: Calling mem event callback 'spdk:(nil)' 00:10:27.133 EAL: request: mp_malloc_sync 00:10:27.133 EAL: No shared files mode enabled, IPC is disabled 00:10:27.133 EAL: Heap on socket 0 was shrunk by 514MB 00:10:27.133 EAL: Trying to obtain current memory policy. 
00:10:27.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:27.133 EAL: Restoring previous memory policy: 4 00:10:27.133 EAL: Calling mem event callback 'spdk:(nil)' 00:10:27.133 EAL: request: mp_malloc_sync 00:10:27.133 EAL: No shared files mode enabled, IPC is disabled 00:10:27.133 EAL: Heap on socket 0 was expanded by 1026MB 00:10:27.392 EAL: Calling mem event callback 'spdk:(nil)' 00:10:27.652 EAL: request: mp_malloc_sync 00:10:27.652 EAL: No shared files mode enabled, IPC is disabled 00:10:27.652 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:27.652 passed 00:10:27.652 00:10:27.652 Run Summary: Type Total Ran Passed Failed Inactive 00:10:27.652 suites 1 1 n/a 0 0 00:10:27.652 tests 2 2 2 0 0 00:10:27.652 asserts 497 497 497 0 n/a 00:10:27.652 00:10:27.652 Elapsed time = 0.989 seconds 00:10:27.652 EAL: Calling mem event callback 'spdk:(nil)' 00:10:27.652 EAL: request: mp_malloc_sync 00:10:27.652 EAL: No shared files mode enabled, IPC is disabled 00:10:27.652 EAL: Heap on socket 0 was shrunk by 2MB 00:10:27.652 EAL: No shared files mode enabled, IPC is disabled 00:10:27.652 EAL: No shared files mode enabled, IPC is disabled 00:10:27.652 EAL: No shared files mode enabled, IPC is disabled 00:10:27.652 00:10:27.652 real 0m1.122s 00:10:27.652 user 0m0.646s 00:10:27.652 sys 0m0.450s 00:10:27.652 17:27:24 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.652 17:27:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:27.652 ************************************ 00:10:27.652 END TEST env_vtophys 00:10:27.652 ************************************ 00:10:27.652 17:27:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:10:27.652 17:27:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:27.652 17:27:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.652 17:27:24 env -- common/autotest_common.sh@10 -- # set +x 00:10:27.652 ************************************ 00:10:27.652 START TEST env_pci 00:10:27.652 ************************************ 00:10:27.652 17:27:24 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:10:27.652 00:10:27.652 00:10:27.652 CUnit - A unit testing framework for C - Version 2.1-3 00:10:27.652 http://cunit.sourceforge.net/ 00:10:27.652 00:10:27.652 00:10:27.652 Suite: pci 00:10:27.652 Test: pci_hook ...[2024-10-14 17:27:24.617260] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1112:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2091678 has claimed it 00:10:27.652 EAL: Cannot find device (10000:00:01.0) 00:10:27.652 EAL: Failed to attach device on primary process 00:10:27.652 passed 00:10:27.652 00:10:27.652 Run Summary: Type Total Ran Passed Failed Inactive 00:10:27.652 suites 1 1 n/a 0 0 00:10:27.652 tests 1 1 1 0 0 00:10:27.652 asserts 25 25 25 0 n/a 00:10:27.652 00:10:27.652 Elapsed time = 0.034 seconds 00:10:27.652 00:10:27.652 real 0m0.055s 00:10:27.652 user 0m0.016s 00:10:27.652 sys 0m0.039s 00:10:27.652 17:27:24 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.652 17:27:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:27.652 ************************************ 00:10:27.652 END TEST env_pci 00:10:27.652 ************************************ 00:10:27.652 17:27:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:27.652 
17:27:24 env -- env/env.sh@15 -- # uname 00:10:27.652 17:27:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:27.652 17:27:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:27.652 17:27:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:27.652 17:27:24 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:27.652 17:27:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.652 17:27:24 env -- common/autotest_common.sh@10 -- # set +x 00:10:27.911 ************************************ 00:10:27.911 START TEST env_dpdk_post_init 00:10:27.911 ************************************ 00:10:27.911 17:27:24 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:27.912 EAL: Detected CPU lcores: 72 00:10:27.912 EAL: Detected NUMA nodes: 2 00:10:27.912 EAL: Detected static linkage of DPDK 00:10:27.912 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:27.912 EAL: Selected IOVA mode 'VA' 00:10:27.912 EAL: VFIO support initialized 00:10:27.912 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:27.912 EAL: Using IOMMU type 1 (Type 1) 00:10:28.847 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:10:34.121 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:10:34.121 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:10:34.381 Starting DPDK initialization... 00:10:34.381 Starting SPDK post initialization... 00:10:34.381 SPDK NVMe probe 00:10:34.381 Attaching to 0000:5e:00.0 00:10:34.381 Attached to 0000:5e:00.0 00:10:34.381 Cleaning up... 
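The spdk_nvme probe above can only claim 0000:5e:00.0 because setup.sh has already rebound it from the kernel nvme driver to vfio-pci (the "nvme -> vfio-pci" lines earlier in the log). As a generic illustration of what such a rebind looks like at the sysfs level (this is not the actual setup.sh code, and it must run as root with the vfio-pci module loaded):

  bdf=0000:5e:00.0
  # detach the device from whatever kernel driver currently owns it
  if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
      echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
  fi
  # steer the next probe to vfio-pci and trigger it
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe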
00:10:34.381 00:10:34.381 real 0m6.492s 00:10:34.381 user 0m4.674s 00:10:34.381 sys 0m1.068s 00:10:34.381 17:27:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.381 17:27:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:34.381 ************************************ 00:10:34.381 END TEST env_dpdk_post_init 00:10:34.381 ************************************ 00:10:34.381 17:27:31 env -- env/env.sh@26 -- # uname 00:10:34.381 17:27:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:34.381 17:27:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:34.381 17:27:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:34.381 17:27:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.381 17:27:31 env -- common/autotest_common.sh@10 -- # set +x 00:10:34.381 ************************************ 00:10:34.381 START TEST env_mem_callbacks 00:10:34.381 ************************************ 00:10:34.381 17:27:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:34.381 EAL: Detected CPU lcores: 72 00:10:34.381 EAL: Detected NUMA nodes: 2 00:10:34.381 EAL: Detected static linkage of DPDK 00:10:34.381 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:34.381 EAL: Selected IOVA mode 'VA' 00:10:34.381 EAL: VFIO support initialized 00:10:34.381 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:34.381 00:10:34.381 00:10:34.381 CUnit - A unit testing framework for C - Version 2.1-3 00:10:34.381 http://cunit.sourceforge.net/ 00:10:34.381 00:10:34.381 00:10:34.381 Suite: memory 00:10:34.381 Test: test ... 
00:10:34.381 register 0x200000200000 2097152 00:10:34.381 malloc 3145728 00:10:34.381 register 0x200000400000 4194304 00:10:34.381 buf 0x200000500000 len 3145728 PASSED 00:10:34.381 malloc 64 00:10:34.381 buf 0x2000004fff40 len 64 PASSED 00:10:34.381 malloc 4194304 00:10:34.381 register 0x200000800000 6291456 00:10:34.381 buf 0x200000a00000 len 4194304 PASSED 00:10:34.381 free 0x200000500000 3145728 00:10:34.381 free 0x2000004fff40 64 00:10:34.381 unregister 0x200000400000 4194304 PASSED 00:10:34.381 free 0x200000a00000 4194304 00:10:34.381 unregister 0x200000800000 6291456 PASSED 00:10:34.381 malloc 8388608 00:10:34.381 register 0x200000400000 10485760 00:10:34.381 buf 0x200000600000 len 8388608 PASSED 00:10:34.381 free 0x200000600000 8388608 00:10:34.381 unregister 0x200000400000 10485760 PASSED 00:10:34.381 passed 00:10:34.381 00:10:34.381 Run Summary: Type Total Ran Passed Failed Inactive 00:10:34.381 suites 1 1 n/a 0 0 00:10:34.381 tests 1 1 1 0 0 00:10:34.381 asserts 15 15 15 0 n/a 00:10:34.381 00:10:34.381 Elapsed time = 0.009 seconds 00:10:34.381 00:10:34.381 real 0m0.073s 00:10:34.381 user 0m0.015s 00:10:34.381 sys 0m0.057s 00:10:34.381 17:27:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.381 17:27:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:34.381 ************************************ 00:10:34.381 END TEST env_mem_callbacks 00:10:34.381 ************************************ 00:10:34.381 00:10:34.381 real 0m8.475s 00:10:34.381 user 0m5.687s 00:10:34.381 sys 0m2.058s 00:10:34.381 17:27:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.381 17:27:31 env -- common/autotest_common.sh@10 -- # set +x 00:10:34.381 ************************************ 00:10:34.381 END TEST env 00:10:34.381 ************************************ 00:10:34.640 17:27:31 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:10:34.640 17:27:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:34.640 17:27:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.640 17:27:31 -- common/autotest_common.sh@10 -- # set +x 00:10:34.640 ************************************ 00:10:34.640 START TEST rpc 00:10:34.640 ************************************ 00:10:34.640 17:27:31 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:10:34.640 * Looking for test storage... 
00:10:34.640 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:34.640 17:27:31 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:34.640 17:27:31 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:34.640 17:27:31 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.640 17:27:31 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.640 17:27:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.640 17:27:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.640 17:27:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.640 17:27:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.640 17:27:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.640 17:27:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.640 17:27:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.640 17:27:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.640 17:27:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.640 17:27:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:34.640 17:27:31 rpc -- scripts/common.sh@345 -- # : 1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.640 17:27:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.640 17:27:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@353 -- # local d=1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.640 17:27:31 rpc -- scripts/common.sh@355 -- # echo 1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.640 17:27:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:34.640 17:27:31 rpc -- scripts/common.sh@353 -- # local d=2 00:10:34.640 17:27:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.640 17:27:31 rpc -- scripts/common.sh@355 -- # echo 2 00:10:34.900 17:27:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.900 17:27:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.900 17:27:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.900 17:27:31 rpc -- scripts/common.sh@368 -- # return 0 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.900 --rc genhtml_branch_coverage=1 00:10:34.900 --rc genhtml_function_coverage=1 00:10:34.900 --rc genhtml_legend=1 00:10:34.900 --rc geninfo_all_blocks=1 00:10:34.900 --rc geninfo_unexecuted_blocks=1 00:10:34.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.900 ' 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.900 --rc genhtml_branch_coverage=1 00:10:34.900 --rc genhtml_function_coverage=1 00:10:34.900 --rc genhtml_legend=1 00:10:34.900 --rc geninfo_all_blocks=1 00:10:34.900 --rc geninfo_unexecuted_blocks=1 00:10:34.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.900 ' 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:10:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.900 --rc genhtml_branch_coverage=1 00:10:34.900 --rc genhtml_function_coverage=1 00:10:34.900 --rc genhtml_legend=1 00:10:34.900 --rc geninfo_all_blocks=1 00:10:34.900 --rc geninfo_unexecuted_blocks=1 00:10:34.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.900 ' 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.900 --rc genhtml_branch_coverage=1 00:10:34.900 --rc genhtml_function_coverage=1 00:10:34.900 --rc genhtml_legend=1 00:10:34.900 --rc geninfo_all_blocks=1 00:10:34.900 --rc geninfo_unexecuted_blocks=1 00:10:34.900 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.900 ' 00:10:34.900 17:27:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2092755 00:10:34.900 17:27:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:34.900 17:27:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:10:34.900 17:27:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2092755 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 2092755 ']' 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.900 17:27:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.900 [2024-10-14 17:27:31.761594] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:34.900 [2024-10-14 17:27:31.761661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092755 ] 00:10:34.900 [2024-10-14 17:27:31.839794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.900 [2024-10-14 17:27:31.887440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:34.900 [2024-10-14 17:27:31.887481] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2092755' to capture a snapshot of events at runtime. 00:10:34.900 [2024-10-14 17:27:31.887491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.900 [2024-10-14 17:27:31.887499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.900 [2024-10-14 17:27:31.887506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2092755 for offline analysis/debug. 
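The long xtrace run above is scripts/common.sh deciding whether the installed lcov is at least major version 2 (the lt 1.15 2 call): cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field. A condensed sketch of that comparison, assuming the same field-splitting behaviour shown in the trace (the real helper also normalizes each field through its decimal() check):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS='.-:' op="$2" v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]
  }
  lt 1.15 2 && echo "installed lcov predates 2.x, use the older option set"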
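The rpc_integrity test that follows exercises the freshly started spdk_tgt purely over JSON-RPC: it lists the (initially empty) bdev table, creates an 8 MB malloc bdev with 512-byte blocks, and checks the results with jq. The same steps can be reproduced by hand with scripts/rpc.py against the default /var/tmp/spdk.sock (a usage sketch, not part of rpc.sh itself):

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  "$rpc" bdev_get_bdevs | jq length     # 0 right after startup
  "$rpc" bdev_malloc_create 8 512       # prints the new bdev name, e.g. Malloc0
  "$rpc" bdev_get_bdevs | jq length     # now 1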
00:10:34.900 [2024-10-14 17:27:31.887904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.159 17:27:32 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.159 17:27:32 rpc -- common/autotest_common.sh@864 -- # return 0 00:10:35.159 17:27:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:35.159 17:27:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:35.159 17:27:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:35.159 17:27:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:35.159 17:27:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.159 17:27:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.159 17:27:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 ************************************ 00:10:35.159 START TEST rpc_integrity 00:10:35.159 ************************************ 00:10:35.159 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:10:35.159 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:35.159 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.159 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.159 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:35.159 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:35.159 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:35.159 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:35.159 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.159 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.160 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:35.160 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:35.160 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.160 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.160 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:35.160 { 00:10:35.160 "name": "Malloc0", 00:10:35.160 "aliases": [ 00:10:35.160 "05b82db5-baf7-4c13-b224-b035f60752b0" 00:10:35.160 ], 00:10:35.160 "product_name": "Malloc disk", 00:10:35.160 "block_size": 512, 00:10:35.160 "num_blocks": 16384, 00:10:35.160 "uuid": "05b82db5-baf7-4c13-b224-b035f60752b0", 00:10:35.160 "assigned_rate_limits": { 00:10:35.160 "rw_ios_per_sec": 0, 00:10:35.160 "rw_mbytes_per_sec": 0, 00:10:35.160 "r_mbytes_per_sec": 0, 00:10:35.160 "w_mbytes_per_sec": 
0 00:10:35.160 }, 00:10:35.160 "claimed": false, 00:10:35.160 "zoned": false, 00:10:35.160 "supported_io_types": { 00:10:35.160 "read": true, 00:10:35.160 "write": true, 00:10:35.160 "unmap": true, 00:10:35.160 "flush": true, 00:10:35.160 "reset": true, 00:10:35.160 "nvme_admin": false, 00:10:35.160 "nvme_io": false, 00:10:35.160 "nvme_io_md": false, 00:10:35.160 "write_zeroes": true, 00:10:35.160 "zcopy": true, 00:10:35.160 "get_zone_info": false, 00:10:35.160 "zone_management": false, 00:10:35.160 "zone_append": false, 00:10:35.160 "compare": false, 00:10:35.160 "compare_and_write": false, 00:10:35.160 "abort": true, 00:10:35.160 "seek_hole": false, 00:10:35.160 "seek_data": false, 00:10:35.160 "copy": true, 00:10:35.160 "nvme_iov_md": false 00:10:35.160 }, 00:10:35.160 "memory_domains": [ 00:10:35.160 { 00:10:35.160 "dma_device_id": "system", 00:10:35.160 "dma_device_type": 1 00:10:35.160 }, 00:10:35.160 { 00:10:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.160 "dma_device_type": 2 00:10:35.160 } 00:10:35.160 ], 00:10:35.160 "driver_specific": {} 00:10:35.160 } 00:10:35.160 ]' 00:10:35.160 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:35.419 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:35.419 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:35.419 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.419 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.419 [2024-10-14 17:27:32.272179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:35.419 [2024-10-14 17:27:32.272209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.419 [2024-10-14 17:27:32.272226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5c71830 00:10:35.419 [2024-10-14 17:27:32.272235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.419 [2024-10-14 17:27:32.273166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.419 [2024-10-14 17:27:32.273189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:35.419 Passthru0 00:10:35.419 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.419 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:35.419 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.419 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.419 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.419 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:35.419 { 00:10:35.419 "name": "Malloc0", 00:10:35.419 "aliases": [ 00:10:35.419 "05b82db5-baf7-4c13-b224-b035f60752b0" 00:10:35.419 ], 00:10:35.419 "product_name": "Malloc disk", 00:10:35.419 "block_size": 512, 00:10:35.419 "num_blocks": 16384, 00:10:35.419 "uuid": "05b82db5-baf7-4c13-b224-b035f60752b0", 00:10:35.419 "assigned_rate_limits": { 00:10:35.419 "rw_ios_per_sec": 0, 00:10:35.419 "rw_mbytes_per_sec": 0, 00:10:35.419 "r_mbytes_per_sec": 0, 00:10:35.419 "w_mbytes_per_sec": 0 00:10:35.419 }, 00:10:35.419 "claimed": true, 00:10:35.419 "claim_type": "exclusive_write", 00:10:35.419 "zoned": false, 00:10:35.419 "supported_io_types": { 00:10:35.419 "read": true, 00:10:35.419 "write": true, 00:10:35.419 "unmap": true, 
00:10:35.419 "flush": true, 00:10:35.419 "reset": true, 00:10:35.419 "nvme_admin": false, 00:10:35.419 "nvme_io": false, 00:10:35.419 "nvme_io_md": false, 00:10:35.419 "write_zeroes": true, 00:10:35.419 "zcopy": true, 00:10:35.419 "get_zone_info": false, 00:10:35.419 "zone_management": false, 00:10:35.419 "zone_append": false, 00:10:35.419 "compare": false, 00:10:35.419 "compare_and_write": false, 00:10:35.419 "abort": true, 00:10:35.419 "seek_hole": false, 00:10:35.419 "seek_data": false, 00:10:35.419 "copy": true, 00:10:35.419 "nvme_iov_md": false 00:10:35.420 }, 00:10:35.420 "memory_domains": [ 00:10:35.420 { 00:10:35.420 "dma_device_id": "system", 00:10:35.420 "dma_device_type": 1 00:10:35.420 }, 00:10:35.420 { 00:10:35.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.420 "dma_device_type": 2 00:10:35.420 } 00:10:35.420 ], 00:10:35.420 "driver_specific": {} 00:10:35.420 }, 00:10:35.420 { 00:10:35.420 "name": "Passthru0", 00:10:35.420 "aliases": [ 00:10:35.420 "69b5559a-8442-588b-b3f8-7fb9280b1a97" 00:10:35.420 ], 00:10:35.420 "product_name": "passthru", 00:10:35.420 "block_size": 512, 00:10:35.420 "num_blocks": 16384, 00:10:35.420 "uuid": "69b5559a-8442-588b-b3f8-7fb9280b1a97", 00:10:35.420 "assigned_rate_limits": { 00:10:35.420 "rw_ios_per_sec": 0, 00:10:35.420 "rw_mbytes_per_sec": 0, 00:10:35.420 "r_mbytes_per_sec": 0, 00:10:35.420 "w_mbytes_per_sec": 0 00:10:35.420 }, 00:10:35.420 "claimed": false, 00:10:35.420 "zoned": false, 00:10:35.420 "supported_io_types": { 00:10:35.420 "read": true, 00:10:35.420 "write": true, 00:10:35.420 "unmap": true, 00:10:35.420 "flush": true, 00:10:35.420 "reset": true, 00:10:35.420 "nvme_admin": false, 00:10:35.420 "nvme_io": false, 00:10:35.420 "nvme_io_md": false, 00:10:35.420 "write_zeroes": true, 00:10:35.420 "zcopy": true, 00:10:35.420 "get_zone_info": false, 00:10:35.420 "zone_management": false, 00:10:35.420 "zone_append": false, 00:10:35.420 "compare": false, 00:10:35.420 "compare_and_write": false, 00:10:35.420 "abort": true, 00:10:35.420 "seek_hole": false, 00:10:35.420 "seek_data": false, 00:10:35.420 "copy": true, 00:10:35.420 "nvme_iov_md": false 00:10:35.420 }, 00:10:35.420 "memory_domains": [ 00:10:35.420 { 00:10:35.420 "dma_device_id": "system", 00:10:35.420 "dma_device_type": 1 00:10:35.420 }, 00:10:35.420 { 00:10:35.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.420 "dma_device_type": 2 00:10:35.420 } 00:10:35.420 ], 00:10:35.420 "driver_specific": { 00:10:35.420 "passthru": { 00:10:35.420 "name": "Passthru0", 00:10:35.420 "base_bdev_name": "Malloc0" 00:10:35.420 } 00:10:35.420 } 00:10:35.420 } 00:10:35.420 ]' 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.420 17:27:32 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:35.420 17:27:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:35.420 00:10:35.420 real 0m0.301s 00:10:35.420 user 0m0.185s 00:10:35.420 sys 0m0.054s 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.420 17:27:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:35.420 ************************************ 00:10:35.420 END TEST rpc_integrity 00:10:35.420 ************************************ 00:10:35.420 17:27:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:35.420 17:27:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.420 17:27:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.420 17:27:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.679 ************************************ 00:10:35.679 START TEST rpc_plugins 00:10:35.679 ************************************ 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:35.679 { 00:10:35.679 "name": "Malloc1", 00:10:35.679 "aliases": [ 00:10:35.679 "7fa985e7-a6bc-47ed-a5f9-64ce6841021b" 00:10:35.679 ], 00:10:35.679 "product_name": "Malloc disk", 00:10:35.679 "block_size": 4096, 00:10:35.679 "num_blocks": 256, 00:10:35.679 "uuid": "7fa985e7-a6bc-47ed-a5f9-64ce6841021b", 00:10:35.679 "assigned_rate_limits": { 00:10:35.679 "rw_ios_per_sec": 0, 00:10:35.679 "rw_mbytes_per_sec": 0, 00:10:35.679 "r_mbytes_per_sec": 0, 00:10:35.679 "w_mbytes_per_sec": 0 00:10:35.679 }, 00:10:35.679 "claimed": false, 00:10:35.679 "zoned": false, 00:10:35.679 "supported_io_types": { 00:10:35.679 "read": true, 00:10:35.679 "write": true, 00:10:35.679 "unmap": true, 00:10:35.679 "flush": true, 00:10:35.679 "reset": true, 00:10:35.679 "nvme_admin": false, 00:10:35.679 "nvme_io": false, 00:10:35.679 "nvme_io_md": false, 00:10:35.679 "write_zeroes": true, 00:10:35.679 "zcopy": true, 00:10:35.679 "get_zone_info": false, 00:10:35.679 "zone_management": false, 00:10:35.679 "zone_append": false, 00:10:35.679 "compare": false, 00:10:35.679 "compare_and_write": false, 00:10:35.679 "abort": true, 00:10:35.679 "seek_hole": false, 00:10:35.679 "seek_data": false, 00:10:35.679 "copy": true, 00:10:35.679 
"nvme_iov_md": false 00:10:35.679 }, 00:10:35.679 "memory_domains": [ 00:10:35.679 { 00:10:35.679 "dma_device_id": "system", 00:10:35.679 "dma_device_type": 1 00:10:35.679 }, 00:10:35.679 { 00:10:35.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.679 "dma_device_type": 2 00:10:35.679 } 00:10:35.679 ], 00:10:35.679 "driver_specific": {} 00:10:35.679 } 00:10:35.679 ]' 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:35.679 17:27:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:35.679 00:10:35.679 real 0m0.151s 00:10:35.679 user 0m0.093s 00:10:35.679 sys 0m0.024s 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.679 17:27:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:35.679 ************************************ 00:10:35.679 END TEST rpc_plugins 00:10:35.679 ************************************ 00:10:35.680 17:27:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:35.680 17:27:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.680 17:27:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.680 17:27:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.680 ************************************ 00:10:35.680 START TEST rpc_trace_cmd_test 00:10:35.680 ************************************ 00:10:35.680 17:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:10:35.680 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:35.680 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:35.680 17:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.680 17:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:35.939 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2092755", 00:10:35.939 "tpoint_group_mask": "0x8", 00:10:35.939 "iscsi_conn": { 00:10:35.939 "mask": "0x2", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "scsi": { 00:10:35.939 "mask": "0x4", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "bdev": { 00:10:35.939 "mask": "0x8", 00:10:35.939 "tpoint_mask": "0xffffffffffffffff" 00:10:35.939 }, 00:10:35.939 "nvmf_rdma": { 00:10:35.939 "mask": "0x10", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "nvmf_tcp": { 00:10:35.939 "mask": "0x20", 
00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "ftl": { 00:10:35.939 "mask": "0x40", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "blobfs": { 00:10:35.939 "mask": "0x80", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "dsa": { 00:10:35.939 "mask": "0x200", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "thread": { 00:10:35.939 "mask": "0x400", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "nvme_pcie": { 00:10:35.939 "mask": "0x800", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "iaa": { 00:10:35.939 "mask": "0x1000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "nvme_tcp": { 00:10:35.939 "mask": "0x2000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "bdev_nvme": { 00:10:35.939 "mask": "0x4000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "sock": { 00:10:35.939 "mask": "0x8000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "blob": { 00:10:35.939 "mask": "0x10000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "bdev_raid": { 00:10:35.939 "mask": "0x20000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 }, 00:10:35.939 "scheduler": { 00:10:35.939 "mask": "0x40000", 00:10:35.939 "tpoint_mask": "0x0" 00:10:35.939 } 00:10:35.939 }' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:35.939 00:10:35.939 real 0m0.235s 00:10:35.939 user 0m0.192s 00:10:35.939 sys 0m0.035s 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.939 17:27:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.939 ************************************ 00:10:35.939 END TEST rpc_trace_cmd_test 00:10:35.939 ************************************ 00:10:35.939 17:27:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:35.939 17:27:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:35.939 17:27:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:35.939 17:27:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.939 17:27:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.939 17:27:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.198 ************************************ 00:10:36.198 START TEST rpc_daemon_integrity 00:10:36.198 ************************************ 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.198 17:27:33 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:36.198 { 00:10:36.198 "name": "Malloc2", 00:10:36.198 "aliases": [ 00:10:36.198 "3b7c51ba-3141-4e5b-b8e8-7c1057a78fe4" 00:10:36.198 ], 00:10:36.198 "product_name": "Malloc disk", 00:10:36.198 "block_size": 512, 00:10:36.198 "num_blocks": 16384, 00:10:36.198 "uuid": "3b7c51ba-3141-4e5b-b8e8-7c1057a78fe4", 00:10:36.198 "assigned_rate_limits": { 00:10:36.198 "rw_ios_per_sec": 0, 00:10:36.198 "rw_mbytes_per_sec": 0, 00:10:36.198 "r_mbytes_per_sec": 0, 00:10:36.198 "w_mbytes_per_sec": 0 00:10:36.198 }, 00:10:36.198 "claimed": false, 00:10:36.198 "zoned": false, 00:10:36.198 "supported_io_types": { 00:10:36.198 "read": true, 00:10:36.198 "write": true, 00:10:36.198 "unmap": true, 00:10:36.198 "flush": true, 00:10:36.198 "reset": true, 00:10:36.198 "nvme_admin": false, 00:10:36.198 "nvme_io": false, 00:10:36.198 "nvme_io_md": false, 00:10:36.198 "write_zeroes": true, 00:10:36.198 "zcopy": true, 00:10:36.198 "get_zone_info": false, 00:10:36.198 "zone_management": false, 00:10:36.198 "zone_append": false, 00:10:36.198 "compare": false, 00:10:36.198 "compare_and_write": false, 00:10:36.198 "abort": true, 00:10:36.198 "seek_hole": false, 00:10:36.198 "seek_data": false, 00:10:36.198 "copy": true, 00:10:36.198 "nvme_iov_md": false 00:10:36.198 }, 00:10:36.198 "memory_domains": [ 00:10:36.198 { 00:10:36.198 "dma_device_id": "system", 00:10:36.198 "dma_device_type": 1 00:10:36.198 }, 00:10:36.198 { 00:10:36.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.198 "dma_device_type": 2 00:10:36.198 } 00:10:36.198 ], 00:10:36.198 "driver_specific": {} 00:10:36.198 } 00:10:36.198 ]' 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.198 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.198 [2024-10-14 17:27:33.210588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:36.198 
[2024-10-14 17:27:33.210617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.198 [2024-10-14 17:27:33.210634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5d93bf0 00:10:36.198 [2024-10-14 17:27:33.210644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.198 [2024-10-14 17:27:33.211546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.199 [2024-10-14 17:27:33.211570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:36.199 Passthru0 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:36.199 { 00:10:36.199 "name": "Malloc2", 00:10:36.199 "aliases": [ 00:10:36.199 "3b7c51ba-3141-4e5b-b8e8-7c1057a78fe4" 00:10:36.199 ], 00:10:36.199 "product_name": "Malloc disk", 00:10:36.199 "block_size": 512, 00:10:36.199 "num_blocks": 16384, 00:10:36.199 "uuid": "3b7c51ba-3141-4e5b-b8e8-7c1057a78fe4", 00:10:36.199 "assigned_rate_limits": { 00:10:36.199 "rw_ios_per_sec": 0, 00:10:36.199 "rw_mbytes_per_sec": 0, 00:10:36.199 "r_mbytes_per_sec": 0, 00:10:36.199 "w_mbytes_per_sec": 0 00:10:36.199 }, 00:10:36.199 "claimed": true, 00:10:36.199 "claim_type": "exclusive_write", 00:10:36.199 "zoned": false, 00:10:36.199 "supported_io_types": { 00:10:36.199 "read": true, 00:10:36.199 "write": true, 00:10:36.199 "unmap": true, 00:10:36.199 "flush": true, 00:10:36.199 "reset": true, 00:10:36.199 "nvme_admin": false, 00:10:36.199 "nvme_io": false, 00:10:36.199 "nvme_io_md": false, 00:10:36.199 "write_zeroes": true, 00:10:36.199 "zcopy": true, 00:10:36.199 "get_zone_info": false, 00:10:36.199 "zone_management": false, 00:10:36.199 "zone_append": false, 00:10:36.199 "compare": false, 00:10:36.199 "compare_and_write": false, 00:10:36.199 "abort": true, 00:10:36.199 "seek_hole": false, 00:10:36.199 "seek_data": false, 00:10:36.199 "copy": true, 00:10:36.199 "nvme_iov_md": false 00:10:36.199 }, 00:10:36.199 "memory_domains": [ 00:10:36.199 { 00:10:36.199 "dma_device_id": "system", 00:10:36.199 "dma_device_type": 1 00:10:36.199 }, 00:10:36.199 { 00:10:36.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.199 "dma_device_type": 2 00:10:36.199 } 00:10:36.199 ], 00:10:36.199 "driver_specific": {} 00:10:36.199 }, 00:10:36.199 { 00:10:36.199 "name": "Passthru0", 00:10:36.199 "aliases": [ 00:10:36.199 "65c811e5-3fb3-5bd7-9406-d32b8e0e3836" 00:10:36.199 ], 00:10:36.199 "product_name": "passthru", 00:10:36.199 "block_size": 512, 00:10:36.199 "num_blocks": 16384, 00:10:36.199 "uuid": "65c811e5-3fb3-5bd7-9406-d32b8e0e3836", 00:10:36.199 "assigned_rate_limits": { 00:10:36.199 "rw_ios_per_sec": 0, 00:10:36.199 "rw_mbytes_per_sec": 0, 00:10:36.199 "r_mbytes_per_sec": 0, 00:10:36.199 "w_mbytes_per_sec": 0 00:10:36.199 }, 00:10:36.199 "claimed": false, 00:10:36.199 "zoned": false, 00:10:36.199 "supported_io_types": { 00:10:36.199 "read": true, 00:10:36.199 "write": true, 00:10:36.199 "unmap": true, 00:10:36.199 "flush": true, 00:10:36.199 "reset": true, 
00:10:36.199 "nvme_admin": false, 00:10:36.199 "nvme_io": false, 00:10:36.199 "nvme_io_md": false, 00:10:36.199 "write_zeroes": true, 00:10:36.199 "zcopy": true, 00:10:36.199 "get_zone_info": false, 00:10:36.199 "zone_management": false, 00:10:36.199 "zone_append": false, 00:10:36.199 "compare": false, 00:10:36.199 "compare_and_write": false, 00:10:36.199 "abort": true, 00:10:36.199 "seek_hole": false, 00:10:36.199 "seek_data": false, 00:10:36.199 "copy": true, 00:10:36.199 "nvme_iov_md": false 00:10:36.199 }, 00:10:36.199 "memory_domains": [ 00:10:36.199 { 00:10:36.199 "dma_device_id": "system", 00:10:36.199 "dma_device_type": 1 00:10:36.199 }, 00:10:36.199 { 00:10:36.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.199 "dma_device_type": 2 00:10:36.199 } 00:10:36.199 ], 00:10:36.199 "driver_specific": { 00:10:36.199 "passthru": { 00:10:36.199 "name": "Passthru0", 00:10:36.199 "base_bdev_name": "Malloc2" 00:10:36.199 } 00:10:36.199 } 00:10:36.199 } 00:10:36.199 ]' 00:10:36.199 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:36.459 00:10:36.459 real 0m0.302s 00:10:36.459 user 0m0.189s 00:10:36.459 sys 0m0.051s 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.459 17:27:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:36.459 ************************************ 00:10:36.459 END TEST rpc_daemon_integrity 00:10:36.459 ************************************ 00:10:36.459 17:27:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:36.459 17:27:33 rpc -- rpc/rpc.sh@84 -- # killprocess 2092755 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@950 -- # '[' -z 2092755 ']' 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@954 -- # kill -0 2092755 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@955 -- # uname 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2092755 
00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2092755' 00:10:36.459 killing process with pid 2092755 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@969 -- # kill 2092755 00:10:36.459 17:27:33 rpc -- common/autotest_common.sh@974 -- # wait 2092755 00:10:36.718 00:10:36.718 real 0m2.218s 00:10:36.718 user 0m2.840s 00:10:36.718 sys 0m0.804s 00:10:36.718 17:27:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.718 17:27:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.718 ************************************ 00:10:36.718 END TEST rpc 00:10:36.718 ************************************ 00:10:36.718 17:27:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:36.718 17:27:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:36.718 17:27:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.718 17:27:33 -- common/autotest_common.sh@10 -- # set +x 00:10:36.978 ************************************ 00:10:36.978 START TEST skip_rpc 00:10:36.978 ************************************ 00:10:36.978 17:27:33 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:36.978 * Looking for test storage... 00:10:36.978 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:10:36.978 17:27:33 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:36.978 17:27:33 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:36.978 17:27:33 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:36.978 17:27:34 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.978 17:27:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:36.978 17:27:34 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.978 17:27:34 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:36.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.978 --rc genhtml_branch_coverage=1 00:10:36.978 --rc genhtml_function_coverage=1 00:10:36.979 --rc genhtml_legend=1 00:10:36.979 --rc geninfo_all_blocks=1 00:10:36.979 --rc geninfo_unexecuted_blocks=1 00:10:36.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.979 ' 00:10:36.979 17:27:34 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:36.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.979 --rc genhtml_branch_coverage=1 00:10:36.979 --rc genhtml_function_coverage=1 00:10:36.979 --rc genhtml_legend=1 00:10:36.979 --rc geninfo_all_blocks=1 00:10:36.979 --rc geninfo_unexecuted_blocks=1 00:10:36.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.979 ' 00:10:36.979 17:27:34 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:36.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.979 --rc genhtml_branch_coverage=1 00:10:36.979 --rc genhtml_function_coverage=1 00:10:36.979 --rc genhtml_legend=1 00:10:36.979 --rc geninfo_all_blocks=1 00:10:36.979 --rc geninfo_unexecuted_blocks=1 00:10:36.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.979 ' 00:10:36.979 17:27:34 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:36.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.979 --rc genhtml_branch_coverage=1 00:10:36.979 --rc genhtml_function_coverage=1 00:10:36.979 --rc genhtml_legend=1 00:10:36.979 --rc geninfo_all_blocks=1 00:10:36.979 --rc geninfo_unexecuted_blocks=1 00:10:36.979 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:36.979 ' 00:10:36.979 17:27:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:36.979 17:27:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:10:36.979 17:27:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:36.979 17:27:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:36.979 17:27:34 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.979 17:27:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.238 ************************************ 00:10:37.238 START TEST skip_rpc 00:10:37.238 ************************************ 00:10:37.238 17:27:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:10:37.238 17:27:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2093257 00:10:37.238 17:27:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:37.238 17:27:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:37.238 17:27:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:37.238 [2024-10-14 17:27:34.097749] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:37.238 [2024-10-14 17:27:34.097815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093257 ] 00:10:37.238 [2024-10-14 17:27:34.178454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.238 [2024-10-14 17:27:34.223063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:42.508 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2093257 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2093257 ']' 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2093257 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2093257 
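The skip_rpc case above starts the target with --no-rpc-server and then asserts that an RPC call fails, which is the point of the test. Reduced to a manual sketch with the same flags seen in the log (the failure comes from nothing listening on /var/tmp/spdk.sock):
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  scripts/rpc.py spdk_get_version        # expected to fail while the RPC server is disabled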
00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2093257' 00:10:42.509 killing process with pid 2093257 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2093257 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2093257 00:10:42.509 00:10:42.509 real 0m5.368s 00:10:42.509 user 0m5.127s 00:10:42.509 sys 0m0.282s 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.509 17:27:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.509 ************************************ 00:10:42.509 END TEST skip_rpc 00:10:42.509 ************************************ 00:10:42.509 17:27:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:42.509 17:27:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:42.509 17:27:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.509 17:27:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.509 ************************************ 00:10:42.509 START TEST skip_rpc_with_json 00:10:42.509 ************************************ 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2094016 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2094016 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2094016 ']' 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.509 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:42.509 [2024-10-14 17:27:39.556343] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:10:42.509 [2024-10-14 17:27:39.556427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094016 ] 00:10:42.768 [2024-10-14 17:27:39.638312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.768 [2024-10-14 17:27:39.685284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:43.027 [2024-10-14 17:27:39.900460] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:43.027 request: 00:10:43.027 { 00:10:43.027 "trtype": "tcp", 00:10:43.027 "method": "nvmf_get_transports", 00:10:43.027 "req_id": 1 00:10:43.027 } 00:10:43.027 Got JSON-RPC error response 00:10:43.027 response: 00:10:43.027 { 00:10:43.027 "code": -19, 00:10:43.027 "message": "No such device" 00:10:43.027 } 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:43.027 [2024-10-14 17:27:39.912552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.027 17:27:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:43.027 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.027 17:27:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:43.027 { 00:10:43.027 "subsystems": [ 00:10:43.027 { 00:10:43.027 "subsystem": "scheduler", 00:10:43.027 "config": [ 00:10:43.027 { 00:10:43.027 "method": "framework_set_scheduler", 00:10:43.027 "params": { 00:10:43.027 "name": "static" 00:10:43.027 } 00:10:43.027 } 00:10:43.027 ] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "vmd", 00:10:43.027 "config": [] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "sock", 00:10:43.027 "config": [ 00:10:43.027 { 00:10:43.027 "method": "sock_set_default_impl", 00:10:43.027 "params": { 00:10:43.027 "impl_name": "posix" 00:10:43.027 } 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "method": "sock_impl_set_options", 00:10:43.027 "params": { 00:10:43.027 "impl_name": "ssl", 00:10:43.027 "recv_buf_size": 4096, 00:10:43.027 "send_buf_size": 4096, 00:10:43.027 "enable_recv_pipe": true, 00:10:43.027 "enable_quickack": false, 00:10:43.027 
"enable_placement_id": 0, 00:10:43.027 "enable_zerocopy_send_server": true, 00:10:43.027 "enable_zerocopy_send_client": false, 00:10:43.027 "zerocopy_threshold": 0, 00:10:43.027 "tls_version": 0, 00:10:43.027 "enable_ktls": false 00:10:43.027 } 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "method": "sock_impl_set_options", 00:10:43.027 "params": { 00:10:43.027 "impl_name": "posix", 00:10:43.027 "recv_buf_size": 2097152, 00:10:43.027 "send_buf_size": 2097152, 00:10:43.027 "enable_recv_pipe": true, 00:10:43.027 "enable_quickack": false, 00:10:43.027 "enable_placement_id": 0, 00:10:43.027 "enable_zerocopy_send_server": true, 00:10:43.027 "enable_zerocopy_send_client": false, 00:10:43.027 "zerocopy_threshold": 0, 00:10:43.027 "tls_version": 0, 00:10:43.027 "enable_ktls": false 00:10:43.027 } 00:10:43.027 } 00:10:43.027 ] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "iobuf", 00:10:43.027 "config": [ 00:10:43.027 { 00:10:43.027 "method": "iobuf_set_options", 00:10:43.027 "params": { 00:10:43.027 "small_pool_count": 8192, 00:10:43.027 "large_pool_count": 1024, 00:10:43.027 "small_bufsize": 8192, 00:10:43.027 "large_bufsize": 135168 00:10:43.027 } 00:10:43.027 } 00:10:43.027 ] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "keyring", 00:10:43.027 "config": [] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "vfio_user_target", 00:10:43.027 "config": null 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "fsdev", 00:10:43.027 "config": [ 00:10:43.027 { 00:10:43.027 "method": "fsdev_set_opts", 00:10:43.027 "params": { 00:10:43.027 "fsdev_io_pool_size": 65535, 00:10:43.027 "fsdev_io_cache_size": 256 00:10:43.027 } 00:10:43.027 } 00:10:43.027 ] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "accel", 00:10:43.027 "config": [ 00:10:43.027 { 00:10:43.027 "method": "accel_set_options", 00:10:43.027 "params": { 00:10:43.027 "small_cache_size": 128, 00:10:43.027 "large_cache_size": 16, 00:10:43.027 "task_count": 2048, 00:10:43.027 "sequence_count": 2048, 00:10:43.027 "buf_count": 2048 00:10:43.027 } 00:10:43.027 } 00:10:43.027 ] 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "subsystem": "bdev", 00:10:43.027 "config": [ 00:10:43.027 { 00:10:43.027 "method": "bdev_set_options", 00:10:43.027 "params": { 00:10:43.027 "bdev_io_pool_size": 65535, 00:10:43.027 "bdev_io_cache_size": 256, 00:10:43.027 "bdev_auto_examine": true, 00:10:43.027 "iobuf_small_cache_size": 128, 00:10:43.027 "iobuf_large_cache_size": 16 00:10:43.027 } 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "method": "bdev_raid_set_options", 00:10:43.027 "params": { 00:10:43.027 "process_window_size_kb": 1024, 00:10:43.027 "process_max_bandwidth_mb_sec": 0 00:10:43.027 } 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "method": "bdev_nvme_set_options", 00:10:43.027 "params": { 00:10:43.027 "action_on_timeout": "none", 00:10:43.027 "timeout_us": 0, 00:10:43.027 "timeout_admin_us": 0, 00:10:43.027 "keep_alive_timeout_ms": 10000, 00:10:43.027 "arbitration_burst": 0, 00:10:43.027 "low_priority_weight": 0, 00:10:43.027 "medium_priority_weight": 0, 00:10:43.027 "high_priority_weight": 0, 00:10:43.027 "nvme_adminq_poll_period_us": 10000, 00:10:43.027 "nvme_ioq_poll_period_us": 0, 00:10:43.027 "io_queue_requests": 0, 00:10:43.027 "delay_cmd_submit": true, 00:10:43.027 "transport_retry_count": 4, 00:10:43.027 "bdev_retry_count": 3, 00:10:43.027 "transport_ack_timeout": 0, 00:10:43.027 "ctrlr_loss_timeout_sec": 0, 00:10:43.027 "reconnect_delay_sec": 0, 00:10:43.027 "fast_io_fail_timeout_sec": 0, 00:10:43.027 
"disable_auto_failback": false, 00:10:43.027 "generate_uuids": false, 00:10:43.027 "transport_tos": 0, 00:10:43.027 "nvme_error_stat": false, 00:10:43.027 "rdma_srq_size": 0, 00:10:43.027 "io_path_stat": false, 00:10:43.027 "allow_accel_sequence": false, 00:10:43.027 "rdma_max_cq_size": 0, 00:10:43.027 "rdma_cm_event_timeout_ms": 0, 00:10:43.027 "dhchap_digests": [ 00:10:43.027 "sha256", 00:10:43.027 "sha384", 00:10:43.027 "sha512" 00:10:43.027 ], 00:10:43.027 "dhchap_dhgroups": [ 00:10:43.027 "null", 00:10:43.027 "ffdhe2048", 00:10:43.027 "ffdhe3072", 00:10:43.027 "ffdhe4096", 00:10:43.027 "ffdhe6144", 00:10:43.027 "ffdhe8192" 00:10:43.027 ] 00:10:43.027 } 00:10:43.027 }, 00:10:43.027 { 00:10:43.027 "method": "bdev_nvme_set_hotplug", 00:10:43.027 "params": { 00:10:43.028 "period_us": 100000, 00:10:43.028 "enable": false 00:10:43.028 } 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "method": "bdev_iscsi_set_options", 00:10:43.028 "params": { 00:10:43.028 "timeout_sec": 30 00:10:43.028 } 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "method": "bdev_wait_for_examine" 00:10:43.028 } 00:10:43.028 ] 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "nvmf", 00:10:43.028 "config": [ 00:10:43.028 { 00:10:43.028 "method": "nvmf_set_config", 00:10:43.028 "params": { 00:10:43.028 "discovery_filter": "match_any", 00:10:43.028 "admin_cmd_passthru": { 00:10:43.028 "identify_ctrlr": false 00:10:43.028 }, 00:10:43.028 "dhchap_digests": [ 00:10:43.028 "sha256", 00:10:43.028 "sha384", 00:10:43.028 "sha512" 00:10:43.028 ], 00:10:43.028 "dhchap_dhgroups": [ 00:10:43.028 "null", 00:10:43.028 "ffdhe2048", 00:10:43.028 "ffdhe3072", 00:10:43.028 "ffdhe4096", 00:10:43.028 "ffdhe6144", 00:10:43.028 "ffdhe8192" 00:10:43.028 ] 00:10:43.028 } 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "method": "nvmf_set_max_subsystems", 00:10:43.028 "params": { 00:10:43.028 "max_subsystems": 1024 00:10:43.028 } 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "method": "nvmf_set_crdt", 00:10:43.028 "params": { 00:10:43.028 "crdt1": 0, 00:10:43.028 "crdt2": 0, 00:10:43.028 "crdt3": 0 00:10:43.028 } 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "method": "nvmf_create_transport", 00:10:43.028 "params": { 00:10:43.028 "trtype": "TCP", 00:10:43.028 "max_queue_depth": 128, 00:10:43.028 "max_io_qpairs_per_ctrlr": 127, 00:10:43.028 "in_capsule_data_size": 4096, 00:10:43.028 "max_io_size": 131072, 00:10:43.028 "io_unit_size": 131072, 00:10:43.028 "max_aq_depth": 128, 00:10:43.028 "num_shared_buffers": 511, 00:10:43.028 "buf_cache_size": 4294967295, 00:10:43.028 "dif_insert_or_strip": false, 00:10:43.028 "zcopy": false, 00:10:43.028 "c2h_success": true, 00:10:43.028 "sock_priority": 0, 00:10:43.028 "abort_timeout_sec": 1, 00:10:43.028 "ack_timeout": 0, 00:10:43.028 "data_wr_pool_size": 0 00:10:43.028 } 00:10:43.028 } 00:10:43.028 ] 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "nbd", 00:10:43.028 "config": [] 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "ublk", 00:10:43.028 "config": [] 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "vhost_blk", 00:10:43.028 "config": [] 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "scsi", 00:10:43.028 "config": null 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "iscsi", 00:10:43.028 "config": [ 00:10:43.028 { 00:10:43.028 "method": "iscsi_set_options", 00:10:43.028 "params": { 00:10:43.028 "node_base": "iqn.2016-06.io.spdk", 00:10:43.028 "max_sessions": 128, 00:10:43.028 "max_connections_per_session": 2, 00:10:43.028 "max_queue_depth": 64, 00:10:43.028 
"default_time2wait": 2, 00:10:43.028 "default_time2retain": 20, 00:10:43.028 "first_burst_length": 8192, 00:10:43.028 "immediate_data": true, 00:10:43.028 "allow_duplicated_isid": false, 00:10:43.028 "error_recovery_level": 0, 00:10:43.028 "nop_timeout": 60, 00:10:43.028 "nop_in_interval": 30, 00:10:43.028 "disable_chap": false, 00:10:43.028 "require_chap": false, 00:10:43.028 "mutual_chap": false, 00:10:43.028 "chap_group": 0, 00:10:43.028 "max_large_datain_per_connection": 64, 00:10:43.028 "max_r2t_per_connection": 4, 00:10:43.028 "pdu_pool_size": 36864, 00:10:43.028 "immediate_data_pool_size": 16384, 00:10:43.028 "data_out_pool_size": 2048 00:10:43.028 } 00:10:43.028 } 00:10:43.028 ] 00:10:43.028 }, 00:10:43.028 { 00:10:43.028 "subsystem": "vhost_scsi", 00:10:43.028 "config": [] 00:10:43.028 } 00:10:43.028 ] 00:10:43.028 } 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2094016 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2094016 ']' 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2094016 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.028 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094016 00:10:43.288 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:43.288 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:43.288 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094016' 00:10:43.288 killing process with pid 2094016 00:10:43.288 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2094016 00:10:43.288 17:27:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2094016 00:10:43.547 17:27:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2094041 00:10:43.547 17:27:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:43.547 17:27:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:48.821 17:27:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2094041 00:10:48.821 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2094041 ']' 00:10:48.821 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2094041 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094041 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 2094041' 00:10:48.822 killing process with pid 2094041 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2094041 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2094041 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:10:48.822 00:10:48.822 real 0m6.276s 00:10:48.822 user 0m5.943s 00:10:48.822 sys 0m0.639s 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:48.822 ************************************ 00:10:48.822 END TEST skip_rpc_with_json 00:10:48.822 ************************************ 00:10:48.822 17:27:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:48.822 17:27:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.822 17:27:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.822 17:27:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.822 ************************************ 00:10:48.822 START TEST skip_rpc_with_delay 00:10:48.822 ************************************ 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:48.822 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:10:48.822 [2024-10-14 17:27:45.912665] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:10:49.081 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:10:49.082 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.082 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.082 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.082 00:10:49.082 real 0m0.044s 00:10:49.082 user 0m0.020s 00:10:49.082 sys 0m0.024s 00:10:49.082 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.082 17:27:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:49.082 ************************************ 00:10:49.082 END TEST skip_rpc_with_delay 00:10:49.082 ************************************ 00:10:49.082 17:27:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:49.082 17:27:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:49.082 17:27:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:49.082 17:27:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:49.082 17:27:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.082 17:27:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.082 ************************************ 00:10:49.082 START TEST exit_on_failed_rpc_init 00:10:49.082 ************************************ 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2094877 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2094877 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2094877 ']' 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.082 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:49.082 [2024-10-14 17:27:46.043685] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:10:49.082 [2024-10-14 17:27:46.043765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094877 ] 00:10:49.082 [2024-10-14 17:27:46.107380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.082 [2024-10-14 17:27:46.155711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:49.341 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:49.341 [2024-10-14 17:27:46.395535] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:49.341 [2024-10-14 17:27:46.395623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094959 ] 00:10:49.601 [2024-10-14 17:27:46.477063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.601 [2024-10-14 17:27:46.521637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.601 [2024-10-14 17:27:46.521716] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
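Note on the trace above: the exit_on_failed_rpc_init run keeps a first spdk_tgt bound to /var/tmp/spdk.sock and then expects a second instance (started on core mask 0x2) to fail during RPC listen because the socket is already in use. A minimal sketch of that pattern, with an illustrative binary path and without the test's real NOT/waitforlisten helpers:

    /path/to/spdk_tgt -m 0x1 &            # first target owns /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                               # stand-in for the real waitforlisten helper
    if /path/to/spdk_tgt -m 0x2; then     # second target is expected to fail RPC init
        echo "unexpected success"; kill "$first_pid"; exit 1
    fi
    kill "$first_pid"; wait "$first_pid" 2>/dev/null

In the log the second instance does fail ("RPC Unix domain socket path /var/tmp/spdk.sock in use"), spdk_app_stop returns non-zero, and the wrapper treats that as the pass condition.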
00:10:49.601 [2024-10-14 17:27:46.521729] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:49.601 [2024-10-14 17:27:46.521737] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2094877 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2094877 ']' 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2094877 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094877 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094877' 00:10:49.601 killing process with pid 2094877 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2094877 00:10:49.601 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2094877 00:10:49.861 00:10:49.861 real 0m0.899s 00:10:49.861 user 0m0.937s 00:10:49.861 sys 0m0.401s 00:10:49.861 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.861 17:27:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:49.861 ************************************ 00:10:49.861 END TEST exit_on_failed_rpc_init 00:10:49.861 ************************************ 00:10:50.120 17:27:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:10:50.120 00:10:50.120 real 0m13.120s 00:10:50.120 user 0m12.254s 00:10:50.120 sys 0m1.692s 00:10:50.120 17:27:46 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.121 17:27:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.121 ************************************ 00:10:50.121 END TEST skip_rpc 00:10:50.121 ************************************ 00:10:50.121 17:27:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:50.121 17:27:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:50.121 17:27:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.121 17:27:47 
-- common/autotest_common.sh@10 -- # set +x 00:10:50.121 ************************************ 00:10:50.121 START TEST rpc_client 00:10:50.121 ************************************ 00:10:50.121 17:27:47 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:50.121 * Looking for test storage... 00:10:50.121 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:10:50.121 17:27:47 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:50.121 17:27:47 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:10:50.121 17:27:47 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:50.380 17:27:47 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.380 17:27:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.381 17:27:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:50.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.381 --rc genhtml_branch_coverage=1 00:10:50.381 --rc genhtml_function_coverage=1 00:10:50.381 --rc genhtml_legend=1 00:10:50.381 --rc geninfo_all_blocks=1 00:10:50.381 --rc geninfo_unexecuted_blocks=1 00:10:50.381 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.381 ' 00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:50.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.381 --rc genhtml_branch_coverage=1 00:10:50.381 --rc genhtml_function_coverage=1 00:10:50.381 --rc genhtml_legend=1 00:10:50.381 --rc geninfo_all_blocks=1 00:10:50.381 --rc geninfo_unexecuted_blocks=1 00:10:50.381 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.381 ' 00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:50.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.381 --rc genhtml_branch_coverage=1 00:10:50.381 --rc genhtml_function_coverage=1 00:10:50.381 --rc genhtml_legend=1 00:10:50.381 --rc geninfo_all_blocks=1 00:10:50.381 --rc geninfo_unexecuted_blocks=1 00:10:50.381 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.381 ' 00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:50.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.381 --rc genhtml_branch_coverage=1 00:10:50.381 --rc genhtml_function_coverage=1 00:10:50.381 --rc genhtml_legend=1 00:10:50.381 --rc geninfo_all_blocks=1 00:10:50.381 --rc geninfo_unexecuted_blocks=1 00:10:50.381 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.381 ' 00:10:50.381 17:27:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:50.381 OK 00:10:50.381 17:27:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:50.381 00:10:50.381 real 0m0.217s 00:10:50.381 user 0m0.120s 00:10:50.381 sys 0m0.116s 00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
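The repeated scripts/common.sh trace above (cmp_versions / lt 1.15 2) is the gate that decides whether the installed lcov is old enough to need the extra branch/function coverage options exported as LCOV_OPTS. The idea, paraphrased loosely from the trace (the real function also handles '>' and '=='; this only shows the less-than path):

    lt() {                                 # succeeds when version $1 < version $2
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 style options"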
00:10:50.381 17:27:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 ************************************ 00:10:50.381 END TEST rpc_client 00:10:50.381 ************************************ 00:10:50.381 17:27:47 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:10:50.381 17:27:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:50.381 17:27:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.381 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 ************************************ 00:10:50.381 START TEST json_config 00:10:50.381 ************************************ 00:10:50.381 17:27:47 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:10:50.381 17:27:47 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:50.381 17:27:47 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:10:50.381 17:27:47 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:50.381 17:27:47 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:50.381 17:27:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.381 17:27:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.381 17:27:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.381 17:27:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.381 17:27:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.381 17:27:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.381 17:27:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.381 17:27:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.381 17:27:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.381 17:27:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.381 17:27:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.641 17:27:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:50.641 17:27:47 json_config -- scripts/common.sh@345 -- # : 1 00:10:50.641 17:27:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.641 17:27:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.641 17:27:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:50.641 17:27:47 json_config -- scripts/common.sh@353 -- # local d=1 00:10:50.641 17:27:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.641 17:27:47 json_config -- scripts/common.sh@355 -- # echo 1 00:10:50.641 17:27:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.641 17:27:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:50.641 17:27:47 json_config -- scripts/common.sh@353 -- # local d=2 00:10:50.641 17:27:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.641 17:27:47 json_config -- scripts/common.sh@355 -- # echo 2 00:10:50.641 17:27:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.641 17:27:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.641 17:27:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.641 17:27:47 json_config -- scripts/common.sh@368 -- # return 0 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.641 --rc genhtml_branch_coverage=1 00:10:50.641 --rc genhtml_function_coverage=1 00:10:50.641 --rc genhtml_legend=1 00:10:50.641 --rc geninfo_all_blocks=1 00:10:50.641 --rc geninfo_unexecuted_blocks=1 00:10:50.641 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.641 ' 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.641 --rc genhtml_branch_coverage=1 00:10:50.641 --rc genhtml_function_coverage=1 00:10:50.641 --rc genhtml_legend=1 00:10:50.641 --rc geninfo_all_blocks=1 00:10:50.641 --rc geninfo_unexecuted_blocks=1 00:10:50.641 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.641 ' 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.641 --rc genhtml_branch_coverage=1 00:10:50.641 --rc genhtml_function_coverage=1 00:10:50.641 --rc genhtml_legend=1 00:10:50.641 --rc geninfo_all_blocks=1 00:10:50.641 --rc geninfo_unexecuted_blocks=1 00:10:50.641 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.641 ' 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.641 --rc genhtml_branch_coverage=1 00:10:50.641 --rc genhtml_function_coverage=1 00:10:50.641 --rc genhtml_legend=1 00:10:50.641 --rc geninfo_all_blocks=1 00:10:50.641 --rc geninfo_unexecuted_blocks=1 00:10:50.641 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.641 ' 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:50.641 17:27:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.641 17:27:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.641 17:27:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.641 17:27:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.641 17:27:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.641 17:27:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.641 17:27:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.641 17:27:47 json_config -- paths/export.sh@5 -- # export PATH 00:10:50.641 17:27:47 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@51 -- # : 0 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.641 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.641 17:27:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:50.641 WARNING: No tests are enabled so not running JSON configuration tests 00:10:50.641 17:27:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:50.641 00:10:50.641 real 0m0.198s 00:10:50.641 user 0m0.122s 00:10:50.641 sys 0m0.085s 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.641 17:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:50.641 ************************************ 00:10:50.641 END TEST json_config 00:10:50.641 ************************************ 00:10:50.641 17:27:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:50.641 17:27:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:50.641 17:27:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.641 17:27:47 -- common/autotest_common.sh@10 -- # set +x 00:10:50.641 ************************************ 00:10:50.641 START TEST json_config_extra_key 00:10:50.641 ************************************ 00:10:50.641 17:27:47 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:50.641 17:27:47 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:50.641 17:27:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov 
--version 00:10:50.641 17:27:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:50.901 17:27:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.901 17:27:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.902 --rc genhtml_branch_coverage=1 00:10:50.902 --rc genhtml_function_coverage=1 00:10:50.902 --rc genhtml_legend=1 00:10:50.902 --rc geninfo_all_blocks=1 00:10:50.902 --rc geninfo_unexecuted_blocks=1 00:10:50.902 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.902 ' 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.902 --rc genhtml_branch_coverage=1 
00:10:50.902 --rc genhtml_function_coverage=1 00:10:50.902 --rc genhtml_legend=1 00:10:50.902 --rc geninfo_all_blocks=1 00:10:50.902 --rc geninfo_unexecuted_blocks=1 00:10:50.902 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.902 ' 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.902 --rc genhtml_branch_coverage=1 00:10:50.902 --rc genhtml_function_coverage=1 00:10:50.902 --rc genhtml_legend=1 00:10:50.902 --rc geninfo_all_blocks=1 00:10:50.902 --rc geninfo_unexecuted_blocks=1 00:10:50.902 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.902 ' 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.902 --rc genhtml_branch_coverage=1 00:10:50.902 --rc genhtml_function_coverage=1 00:10:50.902 --rc genhtml_legend=1 00:10:50.902 --rc geninfo_all_blocks=1 00:10:50.902 --rc geninfo_unexecuted_blocks=1 00:10:50.902 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:50.902 ' 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:50.902 17:27:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.902 17:27:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.902 17:27:47 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.902 17:27:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.902 17:27:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.902 17:27:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.902 17:27:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.902 17:27:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:50.902 17:27:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.902 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.902 17:27:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:50.902 INFO: launching applications... 00:10:50.902 17:27:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2095308 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:50.902 Waiting for target to run... 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2095308 /var/tmp/spdk_tgt.sock 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2095308 ']' 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:50.902 17:27:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
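At this point json_config_extra_key has launched spdk_tgt with "-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json" and is waiting for the RPC socket to appear. The real waitforlisten helper in autotest_common.sh does more (retry accounting, rpc readiness); the gist of the wait is roughly:

    /path/to/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    pid=$!
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk_tgt.sock ]] && break   # RPC socket present: target is listening
        sleep 0.1
    done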
00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.902 17:27:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:50.902 [2024-10-14 17:27:47.829464] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:50.902 [2024-10-14 17:27:47.829532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095308 ] 00:10:51.161 [2024-10-14 17:27:48.122989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.161 [2024-10-14 17:27:48.161024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.730 17:27:48 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.730 17:27:48 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:51.730 00:10:51.730 17:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:51.730 INFO: shutting down applications... 00:10:51.730 17:27:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2095308 ]] 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2095308 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2095308 00:10:51.730 17:27:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:52.297 17:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:52.297 17:27:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:52.297 17:27:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2095308 00:10:52.297 17:27:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:52.297 17:27:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:52.297 17:27:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:52.298 17:27:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:52.298 SPDK target shutdown done 00:10:52.298 17:27:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:52.298 Success 00:10:52.298 00:10:52.298 real 0m1.582s 00:10:52.298 user 0m1.316s 00:10:52.298 sys 0m0.442s 00:10:52.298 17:27:49 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.298 17:27:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:52.298 ************************************ 00:10:52.298 END TEST json_config_extra_key 00:10:52.298 ************************************ 00:10:52.298 17:27:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
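The shutdown trace that follows uses the usual json_config/common.sh pattern: send SIGINT to the target, then poll with kill -0 until the process exits or the 30-iteration budget runs out. Roughly (illustrative only; $pid stands for the recorded app_pid):

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done

In this run the second poll already finds the process gone, so the test prints "SPDK target shutdown done" and "Success".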
00:10:52.298 17:27:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.298 17:27:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.298 17:27:49 -- common/autotest_common.sh@10 -- # set +x 00:10:52.298 ************************************ 00:10:52.298 START TEST alias_rpc 00:10:52.298 ************************************ 00:10:52.298 17:27:49 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:52.298 * Looking for test storage... 00:10:52.298 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:10:52.298 17:27:49 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:52.298 17:27:49 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:52.298 17:27:49 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.558 17:27:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.558 --rc genhtml_branch_coverage=1 00:10:52.558 --rc genhtml_function_coverage=1 00:10:52.558 --rc genhtml_legend=1 00:10:52.558 --rc geninfo_all_blocks=1 00:10:52.558 --rc geninfo_unexecuted_blocks=1 00:10:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:52.558 ' 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.558 --rc genhtml_branch_coverage=1 00:10:52.558 --rc genhtml_function_coverage=1 00:10:52.558 --rc genhtml_legend=1 00:10:52.558 --rc geninfo_all_blocks=1 00:10:52.558 --rc geninfo_unexecuted_blocks=1 00:10:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:52.558 ' 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.558 --rc genhtml_branch_coverage=1 00:10:52.558 --rc genhtml_function_coverage=1 00:10:52.558 --rc genhtml_legend=1 00:10:52.558 --rc geninfo_all_blocks=1 00:10:52.558 --rc geninfo_unexecuted_blocks=1 00:10:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:52.558 ' 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.558 --rc genhtml_branch_coverage=1 00:10:52.558 --rc genhtml_function_coverage=1 00:10:52.558 --rc genhtml_legend=1 00:10:52.558 --rc geninfo_all_blocks=1 00:10:52.558 --rc geninfo_unexecuted_blocks=1 00:10:52.558 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:52.558 ' 00:10:52.558 17:27:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:52.558 17:27:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2095544 00:10:52.558 17:27:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:52.558 17:27:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2095544 00:10:52.558 17:27:49 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 2095544 ']' 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.558 17:27:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.558 [2024-10-14 17:27:49.493554] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:52.558 [2024-10-14 17:27:49.493625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095544 ] 00:10:52.558 [2024-10-14 17:27:49.555510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.558 [2024-10-14 17:27:49.600136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.818 17:27:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.818 17:27:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:52.818 17:27:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:53.077 17:27:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2095544 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2095544 ']' 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2095544 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2095544 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2095544' 00:10:53.077 killing process with pid 2095544 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@969 -- # kill 2095544 00:10:53.077 17:27:50 alias_rpc -- common/autotest_common.sh@974 -- # wait 2095544 00:10:53.336 00:10:53.336 real 0m1.143s 00:10:53.336 user 0m1.146s 00:10:53.336 sys 0m0.453s 00:10:53.336 17:27:50 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.336 17:27:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.336 ************************************ 00:10:53.336 END TEST alias_rpc 00:10:53.336 ************************************ 00:10:53.596 17:27:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:53.596 17:27:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:53.596 17:27:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.596 17:27:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.596 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:10:53.596 ************************************ 00:10:53.596 START TEST 
spdkcli_tcp 00:10:53.596 ************************************ 00:10:53.596 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:53.596 * Looking for test storage... 00:10:53.596 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:10:53.596 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.596 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.596 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:53.596 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:53.596 17:27:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:53.856 17:27:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.856 17:27:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:53.856 17:27:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.856 17:27:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.856 17:27:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.856 17:27:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.856 --rc genhtml_branch_coverage=1 00:10:53.856 --rc genhtml_function_coverage=1 00:10:53.856 --rc genhtml_legend=1 00:10:53.856 --rc geninfo_all_blocks=1 00:10:53.856 --rc geninfo_unexecuted_blocks=1 00:10:53.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:53.856 ' 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.856 --rc genhtml_branch_coverage=1 00:10:53.856 --rc genhtml_function_coverage=1 00:10:53.856 --rc genhtml_legend=1 00:10:53.856 --rc geninfo_all_blocks=1 00:10:53.856 --rc geninfo_unexecuted_blocks=1 00:10:53.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:53.856 ' 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.856 --rc genhtml_branch_coverage=1 00:10:53.856 --rc genhtml_function_coverage=1 00:10:53.856 --rc genhtml_legend=1 00:10:53.856 --rc geninfo_all_blocks=1 00:10:53.856 --rc geninfo_unexecuted_blocks=1 00:10:53.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:53.856 ' 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.856 --rc genhtml_branch_coverage=1 00:10:53.856 --rc genhtml_function_coverage=1 00:10:53.856 --rc genhtml_legend=1 00:10:53.856 --rc geninfo_all_blocks=1 00:10:53.856 --rc geninfo_unexecuted_blocks=1 00:10:53.856 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:53.856 ' 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2095783 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:53.856 17:27:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2095783 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2095783 ']' 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.856 17:27:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.856 [2024-10-14 17:27:50.727075] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:53.856 [2024-10-14 17:27:50.727152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095783 ] 00:10:53.856 [2024-10-14 17:27:50.791482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.856 [2024-10-14 17:27:50.845049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.856 [2024-10-14 17:27:50.845050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.116 17:27:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.116 17:27:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:10:54.116 17:27:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2095790 00:10:54.116 17:27:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:54.116 17:27:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:54.376 [ 00:10:54.376 "spdk_get_version", 00:10:54.376 "rpc_get_methods", 00:10:54.376 "notify_get_notifications", 00:10:54.376 "notify_get_types", 00:10:54.376 "trace_get_info", 00:10:54.376 "trace_get_tpoint_group_mask", 00:10:54.376 "trace_disable_tpoint_group", 00:10:54.376 "trace_enable_tpoint_group", 00:10:54.376 "trace_clear_tpoint_mask", 00:10:54.376 "trace_set_tpoint_mask", 00:10:54.376 "fsdev_set_opts", 00:10:54.376 "fsdev_get_opts", 00:10:54.376 "framework_get_pci_devices", 00:10:54.376 "framework_get_config", 00:10:54.376 "framework_get_subsystems", 00:10:54.376 "vfu_tgt_set_base_path", 00:10:54.376 
"keyring_get_keys", 00:10:54.376 "iobuf_get_stats", 00:10:54.376 "iobuf_set_options", 00:10:54.376 "sock_get_default_impl", 00:10:54.376 "sock_set_default_impl", 00:10:54.376 "sock_impl_set_options", 00:10:54.376 "sock_impl_get_options", 00:10:54.376 "vmd_rescan", 00:10:54.376 "vmd_remove_device", 00:10:54.376 "vmd_enable", 00:10:54.376 "accel_get_stats", 00:10:54.376 "accel_set_options", 00:10:54.376 "accel_set_driver", 00:10:54.376 "accel_crypto_key_destroy", 00:10:54.376 "accel_crypto_keys_get", 00:10:54.376 "accel_crypto_key_create", 00:10:54.376 "accel_assign_opc", 00:10:54.376 "accel_get_module_info", 00:10:54.376 "accel_get_opc_assignments", 00:10:54.376 "bdev_get_histogram", 00:10:54.376 "bdev_enable_histogram", 00:10:54.376 "bdev_set_qos_limit", 00:10:54.376 "bdev_set_qd_sampling_period", 00:10:54.376 "bdev_get_bdevs", 00:10:54.376 "bdev_reset_iostat", 00:10:54.376 "bdev_get_iostat", 00:10:54.376 "bdev_examine", 00:10:54.376 "bdev_wait_for_examine", 00:10:54.376 "bdev_set_options", 00:10:54.376 "scsi_get_devices", 00:10:54.376 "thread_set_cpumask", 00:10:54.376 "scheduler_set_options", 00:10:54.376 "framework_get_governor", 00:10:54.376 "framework_get_scheduler", 00:10:54.376 "framework_set_scheduler", 00:10:54.376 "framework_get_reactors", 00:10:54.376 "thread_get_io_channels", 00:10:54.376 "thread_get_pollers", 00:10:54.376 "thread_get_stats", 00:10:54.376 "framework_monitor_context_switch", 00:10:54.376 "spdk_kill_instance", 00:10:54.376 "log_enable_timestamps", 00:10:54.376 "log_get_flags", 00:10:54.376 "log_clear_flag", 00:10:54.376 "log_set_flag", 00:10:54.376 "log_get_level", 00:10:54.376 "log_set_level", 00:10:54.376 "log_get_print_level", 00:10:54.376 "log_set_print_level", 00:10:54.376 "framework_enable_cpumask_locks", 00:10:54.376 "framework_disable_cpumask_locks", 00:10:54.376 "framework_wait_init", 00:10:54.376 "framework_start_init", 00:10:54.376 "virtio_blk_create_transport", 00:10:54.376 "virtio_blk_get_transports", 00:10:54.376 "vhost_controller_set_coalescing", 00:10:54.376 "vhost_get_controllers", 00:10:54.376 "vhost_delete_controller", 00:10:54.376 "vhost_create_blk_controller", 00:10:54.376 "vhost_scsi_controller_remove_target", 00:10:54.376 "vhost_scsi_controller_add_target", 00:10:54.376 "vhost_start_scsi_controller", 00:10:54.376 "vhost_create_scsi_controller", 00:10:54.376 "ublk_recover_disk", 00:10:54.376 "ublk_get_disks", 00:10:54.376 "ublk_stop_disk", 00:10:54.376 "ublk_start_disk", 00:10:54.376 "ublk_destroy_target", 00:10:54.376 "ublk_create_target", 00:10:54.376 "nbd_get_disks", 00:10:54.376 "nbd_stop_disk", 00:10:54.376 "nbd_start_disk", 00:10:54.376 "env_dpdk_get_mem_stats", 00:10:54.376 "nvmf_stop_mdns_prr", 00:10:54.376 "nvmf_publish_mdns_prr", 00:10:54.376 "nvmf_subsystem_get_listeners", 00:10:54.376 "nvmf_subsystem_get_qpairs", 00:10:54.376 "nvmf_subsystem_get_controllers", 00:10:54.376 "nvmf_get_stats", 00:10:54.376 "nvmf_get_transports", 00:10:54.376 "nvmf_create_transport", 00:10:54.376 "nvmf_get_targets", 00:10:54.376 "nvmf_delete_target", 00:10:54.376 "nvmf_create_target", 00:10:54.376 "nvmf_subsystem_allow_any_host", 00:10:54.376 "nvmf_subsystem_set_keys", 00:10:54.376 "nvmf_subsystem_remove_host", 00:10:54.376 "nvmf_subsystem_add_host", 00:10:54.376 "nvmf_ns_remove_host", 00:10:54.376 "nvmf_ns_add_host", 00:10:54.376 "nvmf_subsystem_remove_ns", 00:10:54.376 "nvmf_subsystem_set_ns_ana_group", 00:10:54.376 "nvmf_subsystem_add_ns", 00:10:54.376 "nvmf_subsystem_listener_set_ana_state", 00:10:54.376 "nvmf_discovery_get_referrals", 
00:10:54.376 "nvmf_discovery_remove_referral", 00:10:54.376 "nvmf_discovery_add_referral", 00:10:54.376 "nvmf_subsystem_remove_listener", 00:10:54.376 "nvmf_subsystem_add_listener", 00:10:54.376 "nvmf_delete_subsystem", 00:10:54.376 "nvmf_create_subsystem", 00:10:54.376 "nvmf_get_subsystems", 00:10:54.376 "nvmf_set_crdt", 00:10:54.376 "nvmf_set_config", 00:10:54.376 "nvmf_set_max_subsystems", 00:10:54.376 "iscsi_get_histogram", 00:10:54.376 "iscsi_enable_histogram", 00:10:54.376 "iscsi_set_options", 00:10:54.376 "iscsi_get_auth_groups", 00:10:54.376 "iscsi_auth_group_remove_secret", 00:10:54.376 "iscsi_auth_group_add_secret", 00:10:54.376 "iscsi_delete_auth_group", 00:10:54.376 "iscsi_create_auth_group", 00:10:54.376 "iscsi_set_discovery_auth", 00:10:54.376 "iscsi_get_options", 00:10:54.376 "iscsi_target_node_request_logout", 00:10:54.376 "iscsi_target_node_set_redirect", 00:10:54.376 "iscsi_target_node_set_auth", 00:10:54.376 "iscsi_target_node_add_lun", 00:10:54.376 "iscsi_get_stats", 00:10:54.376 "iscsi_get_connections", 00:10:54.376 "iscsi_portal_group_set_auth", 00:10:54.376 "iscsi_start_portal_group", 00:10:54.376 "iscsi_delete_portal_group", 00:10:54.376 "iscsi_create_portal_group", 00:10:54.376 "iscsi_get_portal_groups", 00:10:54.376 "iscsi_delete_target_node", 00:10:54.376 "iscsi_target_node_remove_pg_ig_maps", 00:10:54.376 "iscsi_target_node_add_pg_ig_maps", 00:10:54.376 "iscsi_create_target_node", 00:10:54.376 "iscsi_get_target_nodes", 00:10:54.376 "iscsi_delete_initiator_group", 00:10:54.376 "iscsi_initiator_group_remove_initiators", 00:10:54.376 "iscsi_initiator_group_add_initiators", 00:10:54.376 "iscsi_create_initiator_group", 00:10:54.376 "iscsi_get_initiator_groups", 00:10:54.376 "fsdev_aio_delete", 00:10:54.376 "fsdev_aio_create", 00:10:54.376 "keyring_linux_set_options", 00:10:54.376 "keyring_file_remove_key", 00:10:54.376 "keyring_file_add_key", 00:10:54.376 "vfu_virtio_create_fs_endpoint", 00:10:54.376 "vfu_virtio_create_scsi_endpoint", 00:10:54.376 "vfu_virtio_scsi_remove_target", 00:10:54.376 "vfu_virtio_scsi_add_target", 00:10:54.376 "vfu_virtio_create_blk_endpoint", 00:10:54.376 "vfu_virtio_delete_endpoint", 00:10:54.376 "iaa_scan_accel_module", 00:10:54.376 "dsa_scan_accel_module", 00:10:54.376 "ioat_scan_accel_module", 00:10:54.376 "accel_error_inject_error", 00:10:54.376 "bdev_iscsi_delete", 00:10:54.376 "bdev_iscsi_create", 00:10:54.376 "bdev_iscsi_set_options", 00:10:54.376 "bdev_virtio_attach_controller", 00:10:54.376 "bdev_virtio_scsi_get_devices", 00:10:54.376 "bdev_virtio_detach_controller", 00:10:54.376 "bdev_virtio_blk_set_hotplug", 00:10:54.376 "bdev_ftl_set_property", 00:10:54.376 "bdev_ftl_get_properties", 00:10:54.376 "bdev_ftl_get_stats", 00:10:54.376 "bdev_ftl_unmap", 00:10:54.376 "bdev_ftl_unload", 00:10:54.376 "bdev_ftl_delete", 00:10:54.376 "bdev_ftl_load", 00:10:54.376 "bdev_ftl_create", 00:10:54.376 "bdev_aio_delete", 00:10:54.376 "bdev_aio_rescan", 00:10:54.376 "bdev_aio_create", 00:10:54.376 "blobfs_create", 00:10:54.376 "blobfs_detect", 00:10:54.376 "blobfs_set_cache_size", 00:10:54.376 "bdev_zone_block_delete", 00:10:54.376 "bdev_zone_block_create", 00:10:54.376 "bdev_delay_delete", 00:10:54.376 "bdev_delay_create", 00:10:54.376 "bdev_delay_update_latency", 00:10:54.376 "bdev_split_delete", 00:10:54.376 "bdev_split_create", 00:10:54.376 "bdev_error_inject_error", 00:10:54.376 "bdev_error_delete", 00:10:54.376 "bdev_error_create", 00:10:54.376 "bdev_raid_set_options", 00:10:54.376 "bdev_raid_remove_base_bdev", 00:10:54.376 
"bdev_raid_add_base_bdev", 00:10:54.376 "bdev_raid_delete", 00:10:54.376 "bdev_raid_create", 00:10:54.376 "bdev_raid_get_bdevs", 00:10:54.376 "bdev_lvol_set_parent_bdev", 00:10:54.376 "bdev_lvol_set_parent", 00:10:54.376 "bdev_lvol_check_shallow_copy", 00:10:54.376 "bdev_lvol_start_shallow_copy", 00:10:54.376 "bdev_lvol_grow_lvstore", 00:10:54.376 "bdev_lvol_get_lvols", 00:10:54.376 "bdev_lvol_get_lvstores", 00:10:54.376 "bdev_lvol_delete", 00:10:54.376 "bdev_lvol_set_read_only", 00:10:54.376 "bdev_lvol_resize", 00:10:54.376 "bdev_lvol_decouple_parent", 00:10:54.376 "bdev_lvol_inflate", 00:10:54.376 "bdev_lvol_rename", 00:10:54.376 "bdev_lvol_clone_bdev", 00:10:54.376 "bdev_lvol_clone", 00:10:54.376 "bdev_lvol_snapshot", 00:10:54.376 "bdev_lvol_create", 00:10:54.376 "bdev_lvol_delete_lvstore", 00:10:54.376 "bdev_lvol_rename_lvstore", 00:10:54.376 "bdev_lvol_create_lvstore", 00:10:54.376 "bdev_passthru_delete", 00:10:54.376 "bdev_passthru_create", 00:10:54.376 "bdev_nvme_cuse_unregister", 00:10:54.376 "bdev_nvme_cuse_register", 00:10:54.376 "bdev_opal_new_user", 00:10:54.376 "bdev_opal_set_lock_state", 00:10:54.376 "bdev_opal_delete", 00:10:54.376 "bdev_opal_get_info", 00:10:54.377 "bdev_opal_create", 00:10:54.377 "bdev_nvme_opal_revert", 00:10:54.377 "bdev_nvme_opal_init", 00:10:54.377 "bdev_nvme_send_cmd", 00:10:54.377 "bdev_nvme_set_keys", 00:10:54.377 "bdev_nvme_get_path_iostat", 00:10:54.377 "bdev_nvme_get_mdns_discovery_info", 00:10:54.377 "bdev_nvme_stop_mdns_discovery", 00:10:54.377 "bdev_nvme_start_mdns_discovery", 00:10:54.377 "bdev_nvme_set_multipath_policy", 00:10:54.377 "bdev_nvme_set_preferred_path", 00:10:54.377 "bdev_nvme_get_io_paths", 00:10:54.377 "bdev_nvme_remove_error_injection", 00:10:54.377 "bdev_nvme_add_error_injection", 00:10:54.377 "bdev_nvme_get_discovery_info", 00:10:54.377 "bdev_nvme_stop_discovery", 00:10:54.377 "bdev_nvme_start_discovery", 00:10:54.377 "bdev_nvme_get_controller_health_info", 00:10:54.377 "bdev_nvme_disable_controller", 00:10:54.377 "bdev_nvme_enable_controller", 00:10:54.377 "bdev_nvme_reset_controller", 00:10:54.377 "bdev_nvme_get_transport_statistics", 00:10:54.377 "bdev_nvme_apply_firmware", 00:10:54.377 "bdev_nvme_detach_controller", 00:10:54.377 "bdev_nvme_get_controllers", 00:10:54.377 "bdev_nvme_attach_controller", 00:10:54.377 "bdev_nvme_set_hotplug", 00:10:54.377 "bdev_nvme_set_options", 00:10:54.377 "bdev_null_resize", 00:10:54.377 "bdev_null_delete", 00:10:54.377 "bdev_null_create", 00:10:54.377 "bdev_malloc_delete", 00:10:54.377 "bdev_malloc_create" 00:10:54.377 ] 00:10:54.377 17:27:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.377 17:27:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:54.377 17:27:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2095783 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2095783 ']' 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2095783 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2095783 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.377 
17:27:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2095783' 00:10:54.377 killing process with pid 2095783 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2095783 00:10:54.377 17:27:51 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2095783 00:10:54.636 00:10:54.636 real 0m1.172s 00:10:54.636 user 0m2.001s 00:10:54.636 sys 0m0.487s 00:10:54.636 17:27:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.636 17:27:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.636 ************************************ 00:10:54.636 END TEST spdkcli_tcp 00:10:54.636 ************************************ 00:10:54.636 17:27:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:54.636 17:27:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:54.636 17:27:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.636 17:27:51 -- common/autotest_common.sh@10 -- # set +x 00:10:54.896 ************************************ 00:10:54.896 START TEST dpdk_mem_utility 00:10:54.896 ************************************ 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:54.896 * Looking for test storage... 00:10:54.896 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.896 17:27:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.896 --rc genhtml_branch_coverage=1 00:10:54.896 --rc genhtml_function_coverage=1 00:10:54.896 --rc genhtml_legend=1 00:10:54.896 --rc geninfo_all_blocks=1 00:10:54.896 --rc geninfo_unexecuted_blocks=1 00:10:54.896 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:54.896 ' 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.896 --rc genhtml_branch_coverage=1 00:10:54.896 --rc genhtml_function_coverage=1 00:10:54.896 --rc genhtml_legend=1 00:10:54.896 --rc geninfo_all_blocks=1 00:10:54.896 --rc geninfo_unexecuted_blocks=1 00:10:54.896 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:54.896 ' 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.896 --rc genhtml_branch_coverage=1 00:10:54.896 --rc genhtml_function_coverage=1 00:10:54.896 --rc genhtml_legend=1 00:10:54.896 --rc geninfo_all_blocks=1 00:10:54.896 --rc geninfo_unexecuted_blocks=1 00:10:54.896 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:54.896 ' 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.896 --rc genhtml_branch_coverage=1 00:10:54.896 --rc genhtml_function_coverage=1 00:10:54.896 --rc genhtml_legend=1 00:10:54.896 --rc geninfo_all_blocks=1 00:10:54.896 --rc geninfo_unexecuted_blocks=1 00:10:54.896 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:54.896 ' 00:10:54.896 17:27:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:54.896 17:27:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2096031 00:10:54.896 17:27:51 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2096031 00:10:54.896 17:27:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2096031 ']' 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.896 17:27:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:54.896 [2024-10-14 17:27:51.975260] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:54.896 [2024-10-14 17:27:51.975328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096031 ] 00:10:55.155 [2024-10-14 17:27:52.056380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.155 [2024-10-14 17:27:52.103695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.414 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.414 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:10:55.414 17:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:55.414 17:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:55.414 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.414 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:55.414 { 00:10:55.414 "filename": "/tmp/spdk_mem_dump.txt" 00:10:55.414 } 00:10:55.414 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.415 17:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:55.415 DPDK memory size 810.000000 MiB in 1 heap(s) 00:10:55.415 1 heaps totaling size 810.000000 MiB 00:10:55.415 size: 810.000000 MiB heap id: 0 00:10:55.415 end heaps---------- 00:10:55.415 9 mempools totaling size 595.772034 MiB 00:10:55.415 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:55.415 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:55.415 size: 92.545471 MiB name: bdev_io_2096031 00:10:55.415 size: 50.003479 MiB name: msgpool_2096031 00:10:55.415 size: 36.509338 MiB name: fsdev_io_2096031 00:10:55.415 size: 21.763794 MiB name: PDU_Pool 00:10:55.415 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:55.415 size: 4.133484 MiB name: evtpool_2096031 00:10:55.415 size: 0.026123 MiB name: Session_Pool 00:10:55.415 end mempools------- 00:10:55.415 6 memzones totaling size 4.142822 MiB 00:10:55.415 size: 1.000366 MiB name: RG_ring_0_2096031 00:10:55.415 size: 1.000366 MiB name: RG_ring_1_2096031 00:10:55.415 size: 1.000366 MiB name: RG_ring_4_2096031 
00:10:55.415 size: 1.000366 MiB name: RG_ring_5_2096031 00:10:55.415 size: 0.125366 MiB name: RG_ring_2_2096031 00:10:55.415 size: 0.015991 MiB name: RG_ring_3_2096031 00:10:55.415 end memzones------- 00:10:55.415 17:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:55.415 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:55.415 list of free elements. size: 10.862488 MiB 00:10:55.415 element at address: 0x200018a00000 with size: 0.999878 MiB 00:10:55.415 element at address: 0x200018c00000 with size: 0.999878 MiB 00:10:55.415 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:55.415 element at address: 0x200031800000 with size: 0.994446 MiB 00:10:55.415 element at address: 0x200008000000 with size: 0.959839 MiB 00:10:55.415 element at address: 0x200012c00000 with size: 0.954285 MiB 00:10:55.415 element at address: 0x200018e00000 with size: 0.936584 MiB 00:10:55.415 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:55.415 element at address: 0x20001a600000 with size: 0.582886 MiB 00:10:55.415 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:55.415 element at address: 0x200003e00000 with size: 0.490723 MiB 00:10:55.415 element at address: 0x200019000000 with size: 0.485657 MiB 00:10:55.415 element at address: 0x200010600000 with size: 0.481934 MiB 00:10:55.415 element at address: 0x200027a00000 with size: 0.410034 MiB 00:10:55.415 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:55.415 list of standard malloc elements. size: 199.218628 MiB 00:10:55.415 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:10:55.415 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:10:55.415 element at address: 0x200018afff80 with size: 1.000122 MiB 00:10:55.415 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:10:55.415 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:55.415 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:55.415 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:10:55.415 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:55.415 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:10:55.415 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20000085b100 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000008df880 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:10:55.415 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20001067b600 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:10:55.415 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20001a695380 with size: 0.000183 MiB 00:10:55.415 element at address: 0x20001a695440 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200027a69040 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:10:55.415 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:10:55.415 list of memzone associated elements. size: 599.918884 MiB 00:10:55.415 element at address: 0x20001a695500 with size: 211.416748 MiB 00:10:55.415 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:55.415 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:10:55.415 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:55.415 element at address: 0x200012df4780 with size: 92.045044 MiB 00:10:55.415 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2096031_0 00:10:55.415 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:55.415 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2096031_0 00:10:55.415 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:10:55.415 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2096031_0 00:10:55.415 element at address: 0x2000191be940 with size: 20.255554 MiB 00:10:55.415 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:55.415 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:10:55.415 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:55.415 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:55.415 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2096031_0 00:10:55.415 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:55.415 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2096031 00:10:55.415 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:55.415 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2096031 00:10:55.415 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:10:55.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:55.415 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:10:55.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:55.415 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:10:55.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:55.415 element at address: 0x200003efde40 with size: 1.008118 MiB 00:10:55.415 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:10:55.415 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:55.415 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2096031 00:10:55.415 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:55.415 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2096031 00:10:55.415 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:10:55.415 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2096031 00:10:55.415 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:10:55.415 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2096031 00:10:55.415 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:10:55.415 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2096031 00:10:55.415 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:55.415 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2096031 00:10:55.415 element at address: 0x20001067b780 with size: 0.500488 MiB 00:10:55.415 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:55.415 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:10:55.415 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:55.415 element at address: 0x20001907c540 with size: 0.250488 MiB 00:10:55.415 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:55.415 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:55.415 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2096031 00:10:55.415 element at address: 0x2000008df940 with size: 0.125488 MiB 00:10:55.415 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2096031 00:10:55.415 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:10:55.415 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:55.415 element at address: 0x200027a69100 with size: 0.023743 MiB 00:10:55.415 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:55.415 element at address: 0x2000008db680 with size: 0.016113 MiB 00:10:55.415 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2096031 00:10:55.415 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:10:55.415 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:55.415 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:55.415 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2096031 00:10:55.416 element at address: 0x2000008db480 with size: 0.000305 MiB 00:10:55.416 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2096031 00:10:55.416 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:55.416 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2096031 00:10:55.416 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:10:55.416 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:55.416 17:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:55.416 17:27:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2096031 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2096031 ']' 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2096031 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2096031 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2096031' 00:10:55.416 killing process with pid 2096031 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2096031 00:10:55.416 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2096031 00:10:55.982 00:10:55.982 real 0m1.010s 00:10:55.982 user 0m0.909s 00:10:55.982 sys 0m0.450s 00:10:55.982 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.982 17:27:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:55.982 ************************************ 00:10:55.982 END TEST dpdk_mem_utility 00:10:55.982 ************************************ 00:10:55.982 17:27:52 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:10:55.982 17:27:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:55.982 17:27:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.982 17:27:52 -- common/autotest_common.sh@10 -- # set +x 00:10:55.982 ************************************ 00:10:55.982 START TEST event 00:10:55.982 ************************************ 00:10:55.982 17:27:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:10:55.982 * Looking for test storage... 00:10:55.982 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:10:55.982 17:27:52 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.982 17:27:52 event -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.982 17:27:52 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.982 17:27:53 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.982 17:27:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.982 17:27:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.982 17:27:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.982 17:27:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.982 17:27:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.982 17:27:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.982 17:27:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.982 17:27:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.982 17:27:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.982 17:27:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.982 17:27:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.982 17:27:53 event -- scripts/common.sh@344 -- # case "$op" in 00:10:55.982 17:27:53 event -- scripts/common.sh@345 -- # : 1 00:10:55.982 17:27:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.982 17:27:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.982 17:27:53 event -- scripts/common.sh@365 -- # decimal 1 00:10:55.982 17:27:53 event -- scripts/common.sh@353 -- # local d=1 00:10:55.982 17:27:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.982 17:27:53 event -- scripts/common.sh@355 -- # echo 1 00:10:55.982 17:27:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.982 17:27:53 event -- scripts/common.sh@366 -- # decimal 2 00:10:55.982 17:27:53 event -- scripts/common.sh@353 -- # local d=2 00:10:55.982 17:27:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.982 17:27:53 event -- scripts/common.sh@355 -- # echo 2 00:10:55.982 17:27:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.982 17:27:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.982 17:27:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.982 17:27:53 event -- scripts/common.sh@368 -- # return 0 00:10:55.982 17:27:53 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.982 17:27:53 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.983 --rc genhtml_branch_coverage=1 00:10:55.983 --rc genhtml_function_coverage=1 00:10:55.983 --rc genhtml_legend=1 00:10:55.983 --rc geninfo_all_blocks=1 00:10:55.983 --rc geninfo_unexecuted_blocks=1 00:10:55.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:55.983 ' 00:10:55.983 17:27:53 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.983 --rc genhtml_branch_coverage=1 00:10:55.983 --rc genhtml_function_coverage=1 00:10:55.983 --rc genhtml_legend=1 00:10:55.983 --rc geninfo_all_blocks=1 00:10:55.983 --rc geninfo_unexecuted_blocks=1 00:10:55.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:55.983 ' 00:10:55.983 17:27:53 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.983 --rc genhtml_branch_coverage=1 00:10:55.983 --rc genhtml_function_coverage=1 00:10:55.983 --rc genhtml_legend=1 00:10:55.983 --rc geninfo_all_blocks=1 00:10:55.983 --rc geninfo_unexecuted_blocks=1 00:10:55.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:55.983 ' 00:10:55.983 17:27:53 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.983 --rc genhtml_branch_coverage=1 00:10:55.983 --rc genhtml_function_coverage=1 00:10:55.983 --rc genhtml_legend=1 00:10:55.983 --rc geninfo_all_blocks=1 00:10:55.983 --rc geninfo_unexecuted_blocks=1 00:10:55.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:55.983 ' 00:10:55.983 17:27:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:55.983 17:27:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:55.983 17:27:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:55.983 17:27:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:55.983 17:27:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:10:55.983 17:27:53 event -- common/autotest_common.sh@10 -- # set +x 00:10:56.241 ************************************ 00:10:56.241 START TEST event_perf 00:10:56.241 ************************************ 00:10:56.241 17:27:53 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:56.241 Running I/O for 1 seconds...[2024-10-14 17:27:53.092536] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:10:56.241 [2024-10-14 17:27:53.092636] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096271 ] 00:10:56.241 [2024-10-14 17:27:53.177241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.241 [2024-10-14 17:27:53.225089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.241 [2024-10-14 17:27:53.225128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.241 [2024-10-14 17:27:53.225226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.241 [2024-10-14 17:27:53.225227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.177 Running I/O for 1 seconds... 00:10:57.177 lcore 0: 188682 00:10:57.177 lcore 1: 188682 00:10:57.177 lcore 2: 188683 00:10:57.177 lcore 3: 188681 00:10:57.177 done. 00:10:57.177 00:10:57.177 real 0m1.194s 00:10:57.177 user 0m4.103s 00:10:57.177 sys 0m0.088s 00:10:57.436 17:27:54 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.436 17:27:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:57.436 ************************************ 00:10:57.436 END TEST event_perf 00:10:57.436 ************************************ 00:10:57.436 17:27:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:57.436 17:27:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.436 17:27:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.436 17:27:54 event -- common/autotest_common.sh@10 -- # set +x 00:10:57.436 ************************************ 00:10:57.436 START TEST event_reactor 00:10:57.436 ************************************ 00:10:57.436 17:27:54 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:57.436 [2024-10-14 17:27:54.373758] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:10:57.436 [2024-10-14 17:27:54.373836] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096467 ] 00:10:57.436 [2024-10-14 17:27:54.457682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.436 [2024-10-14 17:27:54.504804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.814 test_start 00:10:58.814 oneshot 00:10:58.814 tick 100 00:10:58.814 tick 100 00:10:58.814 tick 250 00:10:58.814 tick 100 00:10:58.814 tick 100 00:10:58.814 tick 100 00:10:58.814 tick 250 00:10:58.814 tick 500 00:10:58.814 tick 100 00:10:58.814 tick 100 00:10:58.814 tick 250 00:10:58.814 tick 100 00:10:58.814 tick 100 00:10:58.815 test_end 00:10:58.815 00:10:58.815 real 0m1.189s 00:10:58.815 user 0m1.092s 00:10:58.815 sys 0m0.092s 00:10:58.815 17:27:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.815 17:27:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:58.815 ************************************ 00:10:58.815 END TEST event_reactor 00:10:58.815 ************************************ 00:10:58.815 17:27:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:58.815 17:27:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.815 17:27:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.815 17:27:55 event -- common/autotest_common.sh@10 -- # set +x 00:10:58.815 ************************************ 00:10:58.815 START TEST event_reactor_perf 00:10:58.815 ************************************ 00:10:58.815 17:27:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:58.815 [2024-10-14 17:27:55.644158] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:10:58.815 [2024-10-14 17:27:55.644242] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096662 ] 00:10:58.815 [2024-10-14 17:27:55.729297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.815 [2024-10-14 17:27:55.776231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.754 test_start 00:10:59.754 test_end 00:10:59.754 Performance: 952086 events per second 00:10:59.754 00:10:59.754 real 0m1.191s 00:10:59.754 user 0m1.092s 00:10:59.754 sys 0m0.095s 00:10:59.754 17:27:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.754 17:27:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:59.754 ************************************ 00:10:59.754 END TEST event_reactor_perf 00:10:59.754 ************************************ 00:11:00.014 17:27:56 event -- event/event.sh@49 -- # uname -s 00:11:00.014 17:27:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:00.014 17:27:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:11:00.014 17:27:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:00.014 17:27:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.014 17:27:56 event -- common/autotest_common.sh@10 -- # set +x 00:11:00.014 ************************************ 00:11:00.014 START TEST event_scheduler 00:11:00.014 ************************************ 00:11:00.014 17:27:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:11:00.014 * Looking for test storage... 
00:11:00.014 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.014 17:27:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.014 --rc genhtml_branch_coverage=1 00:11:00.014 --rc genhtml_function_coverage=1 00:11:00.014 --rc genhtml_legend=1 00:11:00.014 --rc geninfo_all_blocks=1 00:11:00.014 --rc geninfo_unexecuted_blocks=1 00:11:00.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:00.014 ' 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.014 --rc genhtml_branch_coverage=1 00:11:00.014 --rc genhtml_function_coverage=1 00:11:00.014 --rc genhtml_legend=1 00:11:00.014 --rc geninfo_all_blocks=1 00:11:00.014 --rc geninfo_unexecuted_blocks=1 00:11:00.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:00.014 ' 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.014 --rc genhtml_branch_coverage=1 00:11:00.014 --rc genhtml_function_coverage=1 00:11:00.014 --rc genhtml_legend=1 00:11:00.014 --rc geninfo_all_blocks=1 00:11:00.014 --rc geninfo_unexecuted_blocks=1 00:11:00.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:00.014 ' 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:00.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.014 --rc genhtml_branch_coverage=1 00:11:00.014 --rc genhtml_function_coverage=1 00:11:00.014 --rc genhtml_legend=1 00:11:00.014 --rc geninfo_all_blocks=1 00:11:00.014 --rc geninfo_unexecuted_blocks=1 00:11:00.014 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:00.014 ' 00:11:00.014 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:00.014 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2096894 00:11:00.014 17:27:57 event.event_scheduler -- 
scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:00.014 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:00.014 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2096894 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2096894 ']' 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.014 17:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:00.274 [2024-10-14 17:27:57.124588] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:00.274 [2024-10-14 17:27:57.124669] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096894 ] 00:11:00.274 [2024-10-14 17:27:57.210351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.274 [2024-10-14 17:27:57.260242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.274 [2024-10-14 17:27:57.260341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.274 [2024-10-14 17:27:57.260425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.274 [2024-10-14 17:27:57.260426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:11:00.274 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:00.274 [2024-10-14 17:27:57.305133] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:11:00.274 [2024-10-14 17:27:57.305155] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:00.274 [2024-10-14 17:27:57.305166] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:00.274 [2024-10-14 17:27:57.305174] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:00.274 [2024-10-14 17:27:57.305181] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.274 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:00.274 17:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 [2024-10-14 17:27:57.379290] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:00.534 17:27:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:00.534 17:27:57 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:00.534 17:27:57 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 ************************************ 00:11:00.534 START TEST scheduler_create_thread 00:11:00.534 ************************************ 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 2 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 3 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 4 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 5 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 
17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 6 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 7 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 8 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 9 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 10 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.535 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:00.535 17:27:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:00.535 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.535 17:27:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:01.470 17:27:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.470 17:27:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:01.470 17:27:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.470 17:27:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:02.887 17:27:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.887 17:27:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:02.887 17:27:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:02.887 17:27:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.887 17:27:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:03.824 17:28:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.824 00:11:03.824 real 0m3.382s 00:11:03.824 user 0m0.024s 00:11:03.824 sys 0m0.007s 00:11:03.824 17:28:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.824 17:28:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:03.824 ************************************ 00:11:03.824 END TEST scheduler_create_thread 00:11:03.824 ************************************ 00:11:03.824 17:28:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:03.824 17:28:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2096894 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2096894 ']' 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2096894 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2096894 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2096894' 00:11:03.824 killing process with pid 2096894 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2096894 00:11:03.824 17:28:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2096894 00:11:04.392 [2024-10-14 17:28:01.183485] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
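For readers following the trace: the scheduler_create_thread test above is driven entirely through rpc.py with the test-only scheduler_plugin. Condensed into a standalone sketch (socket path, plugin availability on PYTHONPATH, and loop ordering are assumptions taken from the environment the trace shows, not a copy of the test script), the sequence is roughly:

    # Rough, condensed sketch of the RPC sequence traced above -- not the
    # test script itself. Assumes the scheduler test app already listens on
    # /var/tmp/spdk.sock and that scheduler_plugin is importable by rpc.py.
    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # Pick the dynamic scheduler, then finish framework init.
    $rpc framework_set_scheduler dynamic
    $rpc framework_start_init

    # Four busy threads pinned to cores 0-3, four idle ones on the same cores.
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
        $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m $mask -a 0
    done

    # Unpinned threads; drive one to 50% activity, then create and delete
    # another, mirroring what the trace does with thread ids 11 and 12.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"

The dynamic scheduler then has three seconds (the sleep in the trace) to rebalance these threads across the reactors before the app is killed and the test summary below is printed.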
00:11:04.392 00:11:04.392 real 0m4.476s 00:11:04.392 user 0m7.758s 00:11:04.392 sys 0m0.464s 00:11:04.392 17:28:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.392 17:28:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:04.393 ************************************ 00:11:04.393 END TEST event_scheduler 00:11:04.393 ************************************ 00:11:04.393 17:28:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:04.393 17:28:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:04.393 17:28:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:04.393 17:28:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.393 17:28:01 event -- common/autotest_common.sh@10 -- # set +x 00:11:04.393 ************************************ 00:11:04.393 START TEST app_repeat 00:11:04.393 ************************************ 00:11:04.393 17:28:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:04.393 17:28:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2097477 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2097477' 00:11:04.652 Process app_repeat pid: 2097477 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:04.652 spdk_app_start Round 0 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2097477 /var/tmp/spdk-nbd.sock 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2097477 ']' 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:04.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:04.652 [2024-10-14 17:28:01.508227] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:11:04.652 [2024-10-14 17:28:01.508312] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097477 ] 00:11:04.652 [2024-10-14 17:28:01.591721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:04.652 [2024-10-14 17:28:01.637024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.652 [2024-10-14 17:28:01.637024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.652 17:28:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:04.652 17:28:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:04.911 Malloc0 00:11:04.911 17:28:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.170 Malloc1 00:11:05.170 17:28:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.170 17:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:05.429 /dev/nbd0 00:11:05.429 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:05.429 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:05.429 1+0 records in 00:11:05.429 1+0 records out 00:11:05.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263013 s, 15.6 MB/s 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:05.429 17:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:05.429 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.429 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.429 17:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:05.687 /dev/nbd1 00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:05.687 1+0 records in 00:11:05.687 1+0 records out 00:11:05.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028822 s, 14.2 MB/s 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:05.687 17:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
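The waitfornbd helper whose expansion appears above is the readiness gate for each exported device: it polls /proc/partitions for the nbd name, then proves the device is actually readable by pulling a single 4 KiB block with O_DIRECT. A paraphrased, self-contained version of that check (retry count and sleep interval are assumptions; the real helper sourced from autotest_common.sh differs in detail):

    # Paraphrase of the readiness check traced above, not the helper itself.
    waitfornbd() {
        local nbd_name=$1 i tmp size
        tmp=$(mktemp)

        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        # Prove it is readable: one 4 KiB O_DIRECT read must produce a
        # non-empty temp file, exactly as the dd/stat pair above does.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }

    waitfornbd nbd0   # invoked once per exported device, as above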
00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.687 17:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.946 17:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:05.946 { 00:11:05.946 "nbd_device": "/dev/nbd0", 00:11:05.946 "bdev_name": "Malloc0" 00:11:05.946 }, 00:11:05.946 { 00:11:05.946 "nbd_device": "/dev/nbd1", 00:11:05.946 "bdev_name": "Malloc1" 00:11:05.946 } 00:11:05.946 ]' 00:11:05.946 17:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:05.946 { 00:11:05.946 "nbd_device": "/dev/nbd0", 00:11:05.946 "bdev_name": "Malloc0" 00:11:05.946 }, 00:11:05.946 { 00:11:05.946 "nbd_device": "/dev/nbd1", 00:11:05.946 "bdev_name": "Malloc1" 00:11:05.946 } 00:11:05.946 ]' 00:11:05.946 17:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.946 17:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:05.947 /dev/nbd1' 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:05.947 /dev/nbd1' 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:05.947 256+0 records in 00:11:05.947 256+0 records out 00:11:05.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108083 s, 97.0 MB/s 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:05.947 256+0 records in 00:11:05.947 256+0 records out 00:11:05.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197383 s, 53.1 MB/s 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.947 17:28:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:05.947 256+0 records in 00:11:05.947 256+0 records out 00:11:05.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216967 s, 48.3 
MB/s 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.947 17:28:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.206 17:28:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.466 17:28:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.466 17:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:06.725 17:28:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:06.725 17:28:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:06.984 17:28:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:07.244 [2024-10-14 17:28:04.089792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:07.244 [2024-10-14 17:28:04.132919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.244 [2024-10-14 17:28:04.132919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.244 [2024-10-14 17:28:04.174573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:07.244 [2024-10-14 17:28:04.174616] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:10.536 17:28:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:10.536 17:28:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:10.536 spdk_app_start Round 1 00:11:10.536 17:28:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2097477 /var/tmp/spdk-nbd.sock 00:11:10.536 17:28:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2097477 ']' 00:11:10.536 17:28:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:10.536 17:28:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.536 17:28:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:10.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
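Each round then repeats the same pattern against the fresh app_repeat instance: two 64 MB malloc bdevs (4 KiB blocks) are exported over NBD, 1 MiB of random data is written to each device with O_DIRECT, and the devices are compared back against the pattern file before being torn down. Condensed (device names and the temp-file location are illustrative; the traced helpers are nbd_rpc_data_verify / nbd_dd_data_verify in nbd_common.sh):

    # Condensed sketch of the per-round data check, not the helpers themselves.
    pattern=$(mktemp)
    nbds=(/dev/nbd0 /dev/nbd1)

    # 1 MiB of random data, written to every exported NBD device with O_DIRECT.
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for dev in "${nbds[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Read-back verification: the first 1 MiB of each device must match.
    for dev in "${nbds[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"
    done
    rm "$pattern"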
00:11:10.536 17:28:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.536 17:28:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:10.536 17:28:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.536 17:28:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:10.536 17:28:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:10.536 Malloc0 00:11:10.536 17:28:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:10.536 Malloc1 00:11:10.536 17:28:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:10.536 17:28:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:10.796 /dev/nbd0 00:11:10.796 17:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:10.796 17:28:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:10.796 1+0 records in 00:11:10.796 1+0 records out 00:11:10.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230561 s, 17.8 MB/s 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.796 17:28:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:10.796 17:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.796 17:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:10.796 17:28:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:11.054 /dev/nbd1 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:11.054 1+0 records in 00:11:11.054 1+0 records out 00:11:11.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264539 s, 15.5 MB/s 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:11.054 17:28:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.054 17:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:11:11.313 17:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:11.313 { 00:11:11.313 "nbd_device": "/dev/nbd0", 00:11:11.313 "bdev_name": "Malloc0" 00:11:11.313 }, 00:11:11.313 { 00:11:11.313 "nbd_device": "/dev/nbd1", 00:11:11.313 "bdev_name": "Malloc1" 00:11:11.313 } 00:11:11.313 ]' 00:11:11.313 17:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:11.313 { 00:11:11.313 "nbd_device": "/dev/nbd0", 00:11:11.313 "bdev_name": "Malloc0" 00:11:11.313 }, 00:11:11.313 { 00:11:11.313 "nbd_device": "/dev/nbd1", 00:11:11.313 "bdev_name": "Malloc1" 00:11:11.313 } 00:11:11.313 ]' 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:11.314 /dev/nbd1' 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:11.314 /dev/nbd1' 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:11.314 256+0 records in 00:11:11.314 256+0 records out 00:11:11.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107873 s, 97.2 MB/s 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:11.314 256+0 records in 00:11:11.314 256+0 records out 00:11:11.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200393 s, 52.3 MB/s 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:11.314 17:28:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:11.573 256+0 records in 00:11:11.573 256+0 records out 00:11:11.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220335 s, 47.6 MB/s 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.573 17:28:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.833 17:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:12.092 17:28:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:12.092 17:28:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:12.351 17:28:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:12.610 [2024-10-14 17:28:09.497960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:12.610 [2024-10-14 17:28:09.540757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.610 [2024-10-14 17:28:09.540758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.610 [2024-10-14 17:28:09.583845] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:12.610 [2024-10-14 17:28:09.583890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:15.902 17:28:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:15.902 17:28:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:15.902 spdk_app_start Round 2 00:11:15.902 17:28:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2097477 /var/tmp/spdk-nbd.sock 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2097477 ']' 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:15.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.902 17:28:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:15.902 17:28:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:15.902 Malloc0 00:11:15.902 17:28:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:15.902 Malloc1 00:11:15.902 17:28:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.902 17:28:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:16.241 /dev/nbd0 00:11:16.241 17:28:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:16.241 17:28:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.241 17:28:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:16.241 1+0 records in 00:11:16.241 1+0 records out 00:11:16.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257224 s, 15.9 MB/s 00:11:16.242 17:28:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:16.242 17:28:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:16.242 17:28:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:16.242 17:28:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.242 17:28:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:16.242 17:28:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.242 17:28:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:16.242 17:28:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:16.606 /dev/nbd1 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:16.606 1+0 records in 00:11:16.606 1+0 records out 00:11:16.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243031 s, 16.9 MB/s 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.606 17:28:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.606 17:28:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:16.866 { 00:11:16.866 "nbd_device": "/dev/nbd0", 00:11:16.866 "bdev_name": "Malloc0" 00:11:16.866 }, 00:11:16.866 { 00:11:16.866 "nbd_device": "/dev/nbd1", 00:11:16.866 "bdev_name": "Malloc1" 00:11:16.866 } 00:11:16.866 ]' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:16.866 { 00:11:16.866 "nbd_device": "/dev/nbd0", 00:11:16.866 "bdev_name": "Malloc0" 00:11:16.866 }, 00:11:16.866 { 00:11:16.866 "nbd_device": "/dev/nbd1", 00:11:16.866 "bdev_name": "Malloc1" 00:11:16.866 } 00:11:16.866 ]' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:16.866 /dev/nbd1' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:16.866 /dev/nbd1' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:16.866 256+0 records in 00:11:16.866 256+0 records out 00:11:16.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00352669 s, 297 MB/s 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:16.866 256+0 records in 00:11:16.866 256+0 records out 00:11:16.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02059 s, 50.9 MB/s 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:16.866 256+0 records in 00:11:16.866 256+0 records out 00:11:16.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217968 s, 48.1 MB/s 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:16.866 17:28:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.867 17:28:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:17.125 17:28:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:17.125 17:28:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:17.385 17:28:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:17.644 17:28:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:17.644 17:28:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:17.644 17:28:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:17.903 [2024-10-14 17:28:14.848970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.903 [2024-10-14 17:28:14.891814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.903 [2024-10-14 17:28:14.891815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.903 [2024-10-14 17:28:14.933854] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:17.903 [2024-10-14 17:28:14.933898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:21.191 17:28:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2097477 /var/tmp/spdk-nbd.sock 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2097477 ']' 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:21.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
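For reference, the nbd_dd_data_verify pass traced above reduces to a short shell sequence: fill one temporary file with random data, copy it onto each exported /dev/nbd device with O_DIRECT, byte-compare every device against that same file, then delete it. A minimal stand-alone sketch (the 4096-byte block size, 256-block count and 1M compare window mirror the dd/cmp parameters in the trace; the temp-file path and device list are illustrative):

    tmp=/tmp/nbdrandtest                              # illustrative path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # build the reference pattern
    for dev in /dev/nbd0 /dev/nbd1; do                # write phase
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do                # verify phase
        cmp -b -n 1M "$tmp" "$dev" || echo "data mismatch on $dev"
    done
    rm "$tmp"

The oflag=direct on the per-device copies matches the trace and keeps the page cache out of the write path, so the comparison exercises the nbd-to-Malloc bdev round trip rather than cached data.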
00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:21.191 17:28:17 event.app_repeat -- event/event.sh@39 -- # killprocess 2097477 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2097477 ']' 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2097477 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2097477 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2097477' 00:11:21.191 killing process with pid 2097477 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2097477 00:11:21.191 17:28:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2097477 00:11:21.191 spdk_app_start is called in Round 0. 00:11:21.191 Shutdown signal received, stop current app iteration 00:11:21.191 Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 reinitialization... 00:11:21.191 spdk_app_start is called in Round 1. 00:11:21.191 Shutdown signal received, stop current app iteration 00:11:21.191 Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 reinitialization... 00:11:21.191 spdk_app_start is called in Round 2. 00:11:21.191 Shutdown signal received, stop current app iteration 00:11:21.191 Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 reinitialization... 00:11:21.191 spdk_app_start is called in Round 3. 
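Each app_repeat round visible above is driven the same way: the test sends the running app a shutdown request over the nbd RPC socket, sleeps briefly, then polls until the same process is listening on that socket again, at which point the next "spdk_app_start is called in Round N" notice appears. The per-round driver is roughly (pid, socket path and RPC call exactly as traced; the loop bounds are illustrative):

    for round in 0 1 2 3; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # helper from autotest_common.sh
    done

Note that the pid (2097477) never changes: the app_repeat binary handles the shutdown signal itself, stops the current iteration and calls spdk_app_start again for the next round, rather than being relaunched.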
00:11:21.191 Shutdown signal received, stop current app iteration 00:11:21.191 17:28:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:21.191 17:28:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:21.191 00:11:21.191 real 0m16.625s 00:11:21.191 user 0m35.988s 00:11:21.191 sys 0m3.226s 00:11:21.191 17:28:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.191 17:28:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:21.191 ************************************ 00:11:21.191 END TEST app_repeat 00:11:21.191 ************************************ 00:11:21.191 17:28:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:21.191 17:28:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:21.191 17:28:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:21.191 17:28:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.191 17:28:18 event -- common/autotest_common.sh@10 -- # set +x 00:11:21.191 ************************************ 00:11:21.191 START TEST cpu_locks 00:11:21.191 ************************************ 00:11:21.191 17:28:18 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:21.451 * Looking for test storage... 00:11:21.451 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:11:21.451 17:28:18 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:21.451 17:28:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:11:21.451 17:28:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:21.451 17:28:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.451 17:28:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:21.451 17:28:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.451 17:28:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:21.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.451 --rc genhtml_branch_coverage=1 00:11:21.451 --rc genhtml_function_coverage=1 00:11:21.452 --rc genhtml_legend=1 00:11:21.452 --rc geninfo_all_blocks=1 00:11:21.452 --rc geninfo_unexecuted_blocks=1 00:11:21.452 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:21.452 ' 00:11:21.452 17:28:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:21.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.452 --rc genhtml_branch_coverage=1 00:11:21.452 --rc genhtml_function_coverage=1 00:11:21.452 --rc genhtml_legend=1 00:11:21.452 --rc geninfo_all_blocks=1 00:11:21.452 --rc geninfo_unexecuted_blocks=1 00:11:21.452 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:21.452 ' 00:11:21.452 17:28:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:21.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.452 --rc genhtml_branch_coverage=1 00:11:21.452 --rc genhtml_function_coverage=1 00:11:21.452 --rc genhtml_legend=1 00:11:21.452 --rc geninfo_all_blocks=1 00:11:21.452 --rc geninfo_unexecuted_blocks=1 00:11:21.452 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:21.452 ' 00:11:21.452 17:28:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:21.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.452 --rc genhtml_branch_coverage=1 00:11:21.452 --rc genhtml_function_coverage=1 00:11:21.452 --rc genhtml_legend=1 00:11:21.452 --rc geninfo_all_blocks=1 00:11:21.452 --rc geninfo_unexecuted_blocks=1 00:11:21.452 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:21.452 ' 00:11:21.452 17:28:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:21.452 17:28:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:21.452 17:28:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:21.452 17:28:18 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:21.452 17:28:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:21.452 17:28:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.452 17:28:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 ************************************ 00:11:21.452 START TEST default_locks 00:11:21.452 ************************************ 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2099989 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2099989 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2099989 ']' 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.452 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 [2024-10-14 17:28:18.448197] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
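Everything in the cpu_locks suite that follows hinges on one mechanism: an spdk_tgt started with a core mask (here -m 0x1) takes an advisory file lock for every core it claims, and the tests detect that lock by filtering lslocks output for the spdk_cpu_lock name, exactly as the locks_exist helper does a few lines below. A minimal sketch of that check (the lslocks invocation and grep pattern are taken from the trace; the function wrapper is a simplification of cpu_locks.sh):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # succeeds only if pid $1 holds a core lock
    }
    locks_exist "$spdk_tgt_pid"

The stray "lslocks: write error" lines that follow these checks are expected rather than a failure: grep -q exits as soon as it sees a match and closes the pipe, so lslocks reports a broken-pipe write error even though the check itself passed.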
00:11:21.452 [2024-10-14 17:28:18.448257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099989 ] 00:11:21.452 [2024-10-14 17:28:18.528098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.712 [2024-10-14 17:28:18.576826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.712 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.712 17:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:11:21.712 17:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2099989 00:11:21.712 17:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2099989 00:11:21.712 17:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:21.971 lslocks: write error 00:11:21.971 17:28:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2099989 00:11:21.971 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2099989 ']' 00:11:21.971 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2099989 00:11:21.971 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:11:21.971 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.971 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099989 00:11:22.230 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.230 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.230 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099989' 00:11:22.230 killing process with pid 2099989 00:11:22.230 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2099989 00:11:22.230 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2099989 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2099989 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2099989 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2099989 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2099989 ']' 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2099989) - No such process 00:11:22.490 ERROR: process (pid: 2099989) is no longer running 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:22.490 00:11:22.490 real 0m0.984s 00:11:22.490 user 0m0.921s 00:11:22.490 sys 0m0.508s 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.490 17:28:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 ************************************ 00:11:22.490 END TEST default_locks 00:11:22.490 ************************************ 00:11:22.490 17:28:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:22.490 17:28:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:22.490 17:28:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.490 17:28:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 ************************************ 00:11:22.490 START TEST default_locks_via_rpc 00:11:22.490 ************************************ 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2100127 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2100127 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2100127 ']' 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
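The tail of default_locks just above is the negative half of the test: once the target has been killed, waitforlisten on its pid has to fail, and the NOT wrapper turns that expected failure into a pass, so the "ERROR: process (pid: 2099989) is no longer running" message is the desired outcome rather than a problem; no_locks then asserts that no spdk_cpu_lock entries remain. In spirit (simplified; the real helpers in autotest_common.sh also track exit codes and retry counts):

    NOT() { ! "$@"; }                      # pass only when the wrapped command fails
    killprocess "$spdk_tgt_pid"            # stop the lock owner
    NOT waitforlisten "$spdk_tgt_pid"      # the dead pid must no longer be reachable
    no_locks                               # and no spdk_cpu_lock files may be left behind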
00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.490 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 [2024-10-14 17:28:19.511936] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:22.490 [2024-10-14 17:28:19.512000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100127 ] 00:11:22.750 [2024-10-14 17:28:19.596535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.750 [2024-10-14 17:28:19.643596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2100127 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2100127 00:11:23.009 17:28:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2100127 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2100127 ']' 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2100127 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100127 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100127' 00:11:23.577 killing process with pid 2100127 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2100127 00:11:23.577 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2100127 00:11:23.837 00:11:23.837 real 0m1.258s 00:11:23.837 user 0m1.213s 00:11:23.837 sys 0m0.612s 00:11:23.837 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.837 17:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.837 ************************************ 00:11:23.837 END TEST default_locks_via_rpc 00:11:23.837 ************************************ 00:11:23.837 17:28:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:23.837 17:28:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:23.837 17:28:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.837 17:28:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.837 ************************************ 00:11:23.837 START TEST non_locking_app_on_locked_coremask 00:11:23.837 ************************************ 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2100325 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2100325 /var/tmp/spdk.sock 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2100325 ']' 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.837 17:28:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:23.837 [2024-10-14 17:28:20.843650] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
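default_locks_via_rpc, which finishes above, exercises the same per-core lock but through runtime RPCs instead of process lifetime: the lock is released and retaken on a live target, with the lslocks-based helpers consulted after each step. Reduced to its essentials (RPC method names exactly as traced; rpc.py talks to the default /var/tmp/spdk.sock here, so no -s is needed):

    ./scripts/rpc.py framework_disable_cpumask_locks   # release the core locks, no_locks must pass
    ./scripts/rpc.py framework_enable_cpumask_locks    # retake them, locks_exist must pass again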
00:11:23.837 [2024-10-14 17:28:20.843713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100325 ] 00:11:23.837 [2024-10-14 17:28:20.924993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.096 [2024-10-14 17:28:20.973273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.096 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.096 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:24.096 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2100409 00:11:24.096 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2100409 /var/tmp/spdk2.sock 00:11:24.096 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:24.355 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2100409 ']' 00:11:24.355 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:24.355 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.355 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:24.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:24.355 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.355 17:28:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.355 [2024-10-14 17:28:21.211947] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:24.355 [2024-10-14 17:28:21.212016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100409 ] 00:11:24.355 [2024-10-14 17:28:21.300048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
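non_locking_app_on_locked_coremask, traced here, starts a second spdk_tgt on the very core the first one has locked. That only works because the second instance is given --disable-cpumask-locks (hence the "CPU core locks deactivated" notice just above) and its own RPC socket, so the two daemons do not collide on /var/tmp/spdk.sock. Stripped of the test plumbing, the launch pattern is (binary path, mask and flags as traced; backgrounding with & stands in for the waitforlisten handshake):

    ./build/bin/spdk_tgt -m 0x1 &                                                  # holds the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock

The next test, locking_app_on_unlocked_coremask, simply swaps the roles: there the first instance runs with --disable-cpumask-locks and the second, unmodified one is the one that ends up holding the lock.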
00:11:24.355 [2024-10-14 17:28:21.300074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.355 [2024-10-14 17:28:21.390451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.291 17:28:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.291 17:28:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:25.291 17:28:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2100325 00:11:25.291 17:28:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2100325 00:11:25.291 17:28:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:26.228 lslocks: write error 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2100325 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2100325 ']' 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2100325 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100325 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100325' 00:11:26.228 killing process with pid 2100325 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2100325 00:11:26.228 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2100325 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2100409 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2100409 ']' 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2100409 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100409 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100409' 00:11:26.796 
killing process with pid 2100409 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2100409 00:11:26.796 17:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2100409 00:11:27.056 00:11:27.056 real 0m3.247s 00:11:27.056 user 0m3.391s 00:11:27.056 sys 0m1.179s 00:11:27.056 17:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.056 17:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:27.056 ************************************ 00:11:27.056 END TEST non_locking_app_on_locked_coremask 00:11:27.056 ************************************ 00:11:27.056 17:28:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:27.056 17:28:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:27.056 17:28:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.056 17:28:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:27.315 ************************************ 00:11:27.315 START TEST locking_app_on_unlocked_coremask 00:11:27.315 ************************************ 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2100796 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2100796 /var/tmp/spdk.sock 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2100796 ']' 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.315 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:27.315 [2024-10-14 17:28:24.178432] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:27.315 [2024-10-14 17:28:24.178493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100796 ] 00:11:27.315 [2024-10-14 17:28:24.241836] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:27.315 [2024-10-14 17:28:24.241870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.315 [2024-10-14 17:28:24.290346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2100802 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2100802 /var/tmp/spdk2.sock 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2100802 ']' 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:27.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.575 17:28:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:27.575 [2024-10-14 17:28:24.531074] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:11:27.575 [2024-10-14 17:28:24.531160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100802 ] 00:11:27.575 [2024-10-14 17:28:24.617785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.835 [2024-10-14 17:28:24.713067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.406 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.406 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:28.406 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2100802 00:11:28.406 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2100802 00:11:28.406 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:29.784 lslocks: write error 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2100796 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2100796 ']' 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2100796 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100796 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100796' 00:11:29.784 killing process with pid 2100796 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2100796 00:11:29.784 17:28:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2100796 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2100802 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2100802 ']' 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2100802 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100802 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.353 17:28:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100802' 00:11:30.353 killing process with pid 2100802 00:11:30.353 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2100802 00:11:30.354 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2100802 00:11:30.922 00:11:30.922 real 0m3.554s 00:11:30.922 user 0m3.760s 00:11:30.922 sys 0m1.270s 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.922 ************************************ 00:11:30.922 END TEST locking_app_on_unlocked_coremask 00:11:30.922 ************************************ 00:11:30.922 17:28:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:30.922 17:28:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:30.922 17:28:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.922 17:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:30.922 ************************************ 00:11:30.922 START TEST locking_app_on_locked_coremask 00:11:30.922 ************************************ 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2101359 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2101359 /var/tmp/spdk.sock 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2101359 ']' 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.922 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.922 [2024-10-14 17:28:27.811777] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:11:30.922 [2024-10-14 17:28:27.811838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101359 ] 00:11:30.922 [2024-10-14 17:28:27.892048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.922 [2024-10-14 17:28:27.939857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2101364 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2101364 /var/tmp/spdk2.sock 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2101364 /var/tmp/spdk2.sock 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2101364 /var/tmp/spdk2.sock 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2101364 ']' 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:31.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.182 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.182 [2024-10-14 17:28:28.182098] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:11:31.182 [2024-10-14 17:28:28.182191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101364 ] 00:11:31.182 [2024-10-14 17:28:28.270675] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2101359 has claimed it. 00:11:31.182 [2024-10-14 17:28:28.270725] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:31.750 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2101364) - No such process 00:11:31.750 ERROR: process (pid: 2101364) is no longer running 00:11:31.750 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.750 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2101359 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2101359 00:11:32.009 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:32.577 lslocks: write error 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2101359 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2101359 ']' 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2101359 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101359 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101359' 00:11:32.577 killing process with pid 2101359 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2101359 00:11:32.577 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2101359 00:11:32.837 00:11:32.837 real 0m2.132s 00:11:32.837 user 0m2.248s 00:11:32.837 sys 0m0.800s 00:11:32.837 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
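locking_app_on_locked_coremask, whose trace ends here, is the pure collision case: the first target already holds the core-0 lock, so a second target started on the same mask with locking left enabled is expected to abort with "Cannot create lock on core 0, probably process ... has claimed it", and the test asserts exactly that with NOT waitforlisten before confirming that the surviving target still holds its lock. In outline (masks, sockets and the expected error as traced; variable names illustrative):

    ./build/bin/spdk_tgt -m 0x1 &                          # first instance claims core 0
    waitforlisten "$first_pid"
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # second instance, same core, locks enabled
    NOT waitforlisten "$second_pid" /var/tmp/spdk2.sock    # must fail: 'Cannot create lock on core 0'
    locks_exist "$first_pid"                               # the original owner still holds the lock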
00:11:32.837 17:28:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:32.837 ************************************ 00:11:32.837 END TEST locking_app_on_locked_coremask 00:11:32.837 ************************************ 00:11:33.096 17:28:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:33.096 17:28:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:33.096 17:28:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.096 17:28:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:33.096 ************************************ 00:11:33.096 START TEST locking_overlapped_coremask 00:11:33.097 ************************************ 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2101580 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2101580 /var/tmp/spdk.sock 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2101580 ']' 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.097 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:33.097 [2024-10-14 17:28:30.029435] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:11:33.097 [2024-10-14 17:28:30.029496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101580 ] 00:11:33.097 [2024-10-14 17:28:30.111230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.097 [2024-10-14 17:28:30.161153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.097 [2024-10-14 17:28:30.161187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.097 [2024-10-14 17:28:30.161188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.356 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.356 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:33.356 17:28:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:33.356 17:28:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2101743 00:11:33.356 17:28:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2101743 /var/tmp/spdk2.sock 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2101743 /var/tmp/spdk2.sock 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2101743 /var/tmp/spdk2.sock 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2101743 ']' 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:33.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.357 17:28:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:33.357 [2024-10-14 17:28:30.404193] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:11:33.357 [2024-10-14 17:28:30.404252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101743 ] 00:11:33.616 [2024-10-14 17:28:30.496674] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2101580 has claimed it. 00:11:33.616 [2024-10-14 17:28:30.496717] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:34.185 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2101743) - No such process 00:11:34.186 ERROR: process (pid: 2101743) is no longer running 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2101580 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2101580 ']' 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2101580 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101580 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101580' 00:11:34.186 killing process with pid 2101580 00:11:34.186 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2101580 00:11:34.186 17:28:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2101580 00:11:34.445 00:11:34.445 real 0m1.439s 00:11:34.445 user 0m3.960s 00:11:34.445 sys 0m0.429s 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.445 ************************************ 00:11:34.445 END TEST locking_overlapped_coremask 00:11:34.445 ************************************ 00:11:34.445 17:28:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:34.445 17:28:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:34.445 17:28:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.445 17:28:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.445 ************************************ 00:11:34.445 START TEST locking_overlapped_coremask_via_rpc 00:11:34.445 ************************************ 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2101866 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2101866 /var/tmp/spdk.sock 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2101866 ']' 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.445 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.446 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.446 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.705 [2024-10-14 17:28:31.548182] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:34.705 [2024-10-14 17:28:31.548252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101866 ] 00:11:34.705 [2024-10-14 17:28:31.631765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:34.705 [2024-10-14 17:28:31.631800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:34.705 [2024-10-14 17:28:31.681282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.705 [2024-10-14 17:28:31.681384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.705 [2024-10-14 17:28:31.681385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.964 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.964 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:34.964 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2101965 00:11:34.964 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2101965 /var/tmp/spdk2.sock 00:11:34.964 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:34.965 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2101965 ']' 00:11:34.965 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:34.965 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.965 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:34.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:34.965 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.965 17:28:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.965 [2024-10-14 17:28:31.926024] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:34.965 [2024-10-14 17:28:31.926136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101965 ] 00:11:34.965 [2024-10-14 17:28:32.020526] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:34.965 [2024-10-14 17:28:32.020556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.224 [2024-10-14 17:28:32.117021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.224 [2024-10-14 17:28:32.117107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.224 [2024-10-14 17:28:32.117109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.793 [2024-10-14 17:28:32.786092] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2101866 has claimed it. 
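The claim failure above (like the identical one in the previous test) follows directly from the core masks: 0x7 is binary 00111, i.e. cores 0-2, and 0x1c is binary 11100, i.e. cores 2-4, so the only core the two targets share is core 2, which is exactly the core the second target reports it cannot lock.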
00:11:35.793 request: 00:11:35.793 { 00:11:35.793 "method": "framework_enable_cpumask_locks", 00:11:35.793 "req_id": 1 00:11:35.793 } 00:11:35.793 Got JSON-RPC error response 00:11:35.793 response: 00:11:35.793 { 00:11:35.793 "code": -32603, 00:11:35.793 "message": "Failed to claim CPU core: 2" 00:11:35.793 } 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2101866 /var/tmp/spdk.sock 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2101866 ']' 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.793 17:28:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2101965 /var/tmp/spdk2.sock 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2101965 ']' 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:36.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
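That -32603 "Failed to claim CPU core: 2" response is the RPC-level form of the same overlap. Both targets in this test were started with --disable-cpumask-locks, so neither claims its cores at startup; the claim only happens when framework_enable_cpumask_locks is called, and the second claim then collides on core 2. A rough sketch of that two-step flow, assuming the same masks and socket paths as this run (rpc.py talks to /var/tmp/spdk.sock unless -s is given):

    # start two targets with overlapping masks but deferred core locking
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 2
    # first claim succeeds and creates /var/tmp/spdk_cpu_lock_000 .. _002
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second claim fails with JSON-RPC error -32603 "Failed to claim CPU core: 2"
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || true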
00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.052 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:36.312 00:11:36.312 real 0m1.707s 00:11:36.312 user 0m0.820s 00:11:36.312 sys 0m0.152s 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.312 17:28:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.312 ************************************ 00:11:36.312 END TEST locking_overlapped_coremask_via_rpc 00:11:36.312 ************************************ 00:11:36.312 17:28:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:36.312 17:28:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2101866 ]] 00:11:36.312 17:28:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2101866 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2101866 ']' 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2101866 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101866 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101866' 00:11:36.312 killing process with pid 2101866 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2101866 00:11:36.312 17:28:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2101866 00:11:36.571 17:28:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2101965 ]] 00:11:36.571 17:28:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2101965 00:11:36.571 17:28:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2101965 ']' 00:11:36.571 17:28:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2101965 00:11:36.571 17:28:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:11:36.571 17:28:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:11:36.571 17:28:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101965 00:11:36.831 17:28:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:11:36.831 17:28:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:11:36.831 17:28:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101965' 00:11:36.831 killing process with pid 2101965 00:11:36.831 17:28:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2101965 00:11:36.831 17:28:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2101965 00:11:37.090 17:28:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:37.091 17:28:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:37.091 17:28:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2101866 ]] 00:11:37.091 17:28:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2101866 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2101866 ']' 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2101866 00:11:37.091 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2101866) - No such process 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2101866 is not found' 00:11:37.091 Process with pid 2101866 is not found 00:11:37.091 17:28:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2101965 ]] 00:11:37.091 17:28:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2101965 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2101965 ']' 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2101965 00:11:37.091 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2101965) - No such process 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2101965 is not found' 00:11:37.091 Process with pid 2101965 is not found 00:11:37.091 17:28:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:37.091 00:11:37.091 real 0m15.811s 00:11:37.091 user 0m26.155s 00:11:37.091 sys 0m6.038s 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.091 17:28:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 ************************************ 00:11:37.091 END TEST cpu_locks 00:11:37.091 ************************************ 00:11:37.091 00:11:37.091 real 0m41.198s 00:11:37.091 user 1m16.463s 00:11:37.091 sys 0m10.494s 00:11:37.091 17:28:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.091 17:28:34 event -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 ************************************ 00:11:37.091 END TEST event 00:11:37.091 ************************************ 00:11:37.091 17:28:34 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:11:37.091 17:28:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:37.091 17:28:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.091 17:28:34 -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 ************************************ 00:11:37.091 START TEST thread 00:11:37.091 ************************************ 00:11:37.091 17:28:34 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:11:37.350 * Looking for test storage... 00:11:37.350 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:11:37.350 17:28:34 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.350 17:28:34 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.350 17:28:34 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.350 17:28:34 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.350 17:28:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.351 17:28:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.351 17:28:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.351 17:28:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.351 17:28:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.351 17:28:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.351 17:28:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.351 17:28:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.351 17:28:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.351 17:28:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.351 17:28:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.351 17:28:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:37.351 17:28:34 thread -- scripts/common.sh@345 -- # : 1 00:11:37.351 17:28:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.351 17:28:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:37.351 17:28:34 thread -- scripts/common.sh@365 -- # decimal 1 00:11:37.351 17:28:34 thread -- scripts/common.sh@353 -- # local d=1 00:11:37.351 17:28:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.351 17:28:34 thread -- scripts/common.sh@355 -- # echo 1 00:11:37.351 17:28:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.351 17:28:34 thread -- scripts/common.sh@366 -- # decimal 2 00:11:37.351 17:28:34 thread -- scripts/common.sh@353 -- # local d=2 00:11:37.351 17:28:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.351 17:28:34 thread -- scripts/common.sh@355 -- # echo 2 00:11:37.351 17:28:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.351 17:28:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.351 17:28:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.351 17:28:34 thread -- scripts/common.sh@368 -- # return 0 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.351 --rc genhtml_branch_coverage=1 00:11:37.351 --rc genhtml_function_coverage=1 00:11:37.351 --rc genhtml_legend=1 00:11:37.351 --rc geninfo_all_blocks=1 00:11:37.351 --rc geninfo_unexecuted_blocks=1 00:11:37.351 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:37.351 ' 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.351 --rc genhtml_branch_coverage=1 00:11:37.351 --rc genhtml_function_coverage=1 00:11:37.351 --rc genhtml_legend=1 
00:11:37.351 --rc geninfo_all_blocks=1 00:11:37.351 --rc geninfo_unexecuted_blocks=1 00:11:37.351 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:37.351 ' 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.351 --rc genhtml_branch_coverage=1 00:11:37.351 --rc genhtml_function_coverage=1 00:11:37.351 --rc genhtml_legend=1 00:11:37.351 --rc geninfo_all_blocks=1 00:11:37.351 --rc geninfo_unexecuted_blocks=1 00:11:37.351 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:37.351 ' 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.351 --rc genhtml_branch_coverage=1 00:11:37.351 --rc genhtml_function_coverage=1 00:11:37.351 --rc genhtml_legend=1 00:11:37.351 --rc geninfo_all_blocks=1 00:11:37.351 --rc geninfo_unexecuted_blocks=1 00:11:37.351 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:37.351 ' 00:11:37.351 17:28:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.351 17:28:34 thread -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 ************************************ 00:11:37.351 START TEST thread_poller_perf 00:11:37.351 ************************************ 00:11:37.351 17:28:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:37.351 [2024-10-14 17:28:34.372972] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:37.351 [2024-10-14 17:28:34.373064] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102403 ] 00:11:37.610 [2024-10-14 17:28:34.457967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.610 [2024-10-14 17:28:34.501930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.610 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:38.547 [2024-10-14T15:28:35.639Z] ====================================== 00:11:38.548 [2024-10-14T15:28:35.640Z] busy:2304387448 (cyc) 00:11:38.548 [2024-10-14T15:28:35.640Z] total_run_count: 828000 00:11:38.548 [2024-10-14T15:28:35.640Z] tsc_hz: 2300000000 (cyc) 00:11:38.548 [2024-10-14T15:28:35.640Z] ====================================== 00:11:38.548 [2024-10-14T15:28:35.640Z] poller_cost: 2783 (cyc), 1210 (nsec) 00:11:38.548 00:11:38.548 real 0m1.192s 00:11:38.548 user 0m1.095s 00:11:38.548 sys 0m0.094s 00:11:38.548 17:28:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.548 17:28:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:38.548 ************************************ 00:11:38.548 END TEST thread_poller_perf 00:11:38.548 ************************************ 00:11:38.548 17:28:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:38.548 17:28:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:11:38.548 17:28:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.548 17:28:35 thread -- common/autotest_common.sh@10 -- # set +x 00:11:38.548 ************************************ 00:11:38.548 START TEST thread_poller_perf 00:11:38.548 ************************************ 00:11:38.548 17:28:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:38.807 [2024-10-14 17:28:35.647772] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:38.807 [2024-10-14 17:28:35.647872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102561 ] 00:11:38.807 [2024-10-14 17:28:35.731866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.807 [2024-10-14 17:28:35.776818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.807 Running 1000 pollers for 1 seconds with 0 microseconds period. 
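The poller_cost figures in the table above line up with the other counters: cycles per invocation is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A quick consistency check on the 1-microseconds-period run, using only the values printed in the log:

    awk 'BEGIN {
        busy = 2304387448; runs = 828000; hz = 2300000000
        cyc = busy / runs                     # ~2783 cycles per poller invocation
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
    }'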
00:11:39.745 [2024-10-14T15:28:36.837Z] ====================================== 00:11:39.745 [2024-10-14T15:28:36.837Z] busy:2301182442 (cyc) 00:11:39.745 [2024-10-14T15:28:36.837Z] total_run_count: 13063000 00:11:39.745 [2024-10-14T15:28:36.837Z] tsc_hz: 2300000000 (cyc) 00:11:39.745 [2024-10-14T15:28:36.837Z] ====================================== 00:11:39.745 [2024-10-14T15:28:36.837Z] poller_cost: 176 (cyc), 76 (nsec) 00:11:39.745 00:11:39.745 real 0m1.187s 00:11:39.745 user 0m1.089s 00:11:39.745 sys 0m0.094s 00:11:39.745 17:28:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.745 17:28:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:39.745 ************************************ 00:11:39.745 END TEST thread_poller_perf 00:11:39.745 ************************************ 00:11:40.006 17:28:36 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:40.006 17:28:36 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:11:40.006 17:28:36 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.006 17:28:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.006 17:28:36 thread -- common/autotest_common.sh@10 -- # set +x 00:11:40.006 ************************************ 00:11:40.006 START TEST thread_spdk_lock 00:11:40.006 ************************************ 00:11:40.006 17:28:36 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:11:40.006 [2024-10-14 17:28:36.912942] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:40.006 [2024-10-14 17:28:36.913049] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102714 ] 00:11:40.006 [2024-10-14 17:28:36.980363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:40.006 [2024-10-14 17:28:37.027956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.006 [2024-10-14 17:28:37.027957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.575 [2024-10-14 17:28:37.521735] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:40.575 [2024-10-14 17:28:37.521773] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:40.575 [2024-10-14 17:28:37.521783] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14d2e00 00:11:40.575 [2024-10-14 17:28:37.522540] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:40.575 [2024-10-14 17:28:37.522645] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:40.575 [2024-10-14 
17:28:37.522664] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:40.575 Starting test contend 00:11:40.575 Worker Delay Wait us Hold us Total us 00:11:40.575 0 3 163869 188190 352059 00:11:40.575 1 5 84306 287529 371836 00:11:40.575 PASS test contend 00:11:40.575 Starting test hold_by_poller 00:11:40.575 PASS test hold_by_poller 00:11:40.575 Starting test hold_by_message 00:11:40.575 PASS test hold_by_message 00:11:40.575 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:11:40.575 100014 assertions passed 00:11:40.575 0 assertions failed 00:11:40.575 00:11:40.575 real 0m0.666s 00:11:40.575 user 0m1.072s 00:11:40.575 sys 0m0.085s 00:11:40.575 17:28:37 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.575 17:28:37 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:40.575 ************************************ 00:11:40.575 END TEST thread_spdk_lock 00:11:40.575 ************************************ 00:11:40.575 00:11:40.575 real 0m3.476s 00:11:40.575 user 0m3.438s 00:11:40.575 sys 0m0.557s 00:11:40.575 17:28:37 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.575 17:28:37 thread -- common/autotest_common.sh@10 -- # set +x 00:11:40.575 ************************************ 00:11:40.575 END TEST thread 00:11:40.575 ************************************ 00:11:40.575 17:28:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:40.575 17:28:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:11:40.575 17:28:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.575 17:28:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.575 17:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:40.834 ************************************ 00:11:40.834 START TEST app_cmdline 00:11:40.834 ************************************ 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:11:40.834 * Looking for test storage... 
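Two notes on the spdk_lock output a few entries up. The *ERROR* lines (deadlock detected, locks held while a thread goes off CPU) appear to be the error paths the test drives deliberately, given the PASS lines and the "100014 assertions passed / 0 assertions failed" summary that follow. The contention table is also internally consistent: per worker, Total us is Wait us plus Hold us, e.g. worker 0 reports 163869 us waiting plus 188190 us holding, matching the 352059 us total shown.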
00:11:40.834 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.834 17:28:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.834 --rc genhtml_branch_coverage=1 00:11:40.834 --rc genhtml_function_coverage=1 00:11:40.834 --rc genhtml_legend=1 00:11:40.834 --rc geninfo_all_blocks=1 00:11:40.834 --rc geninfo_unexecuted_blocks=1 00:11:40.834 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:40.834 ' 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.834 --rc genhtml_branch_coverage=1 00:11:40.834 --rc genhtml_function_coverage=1 00:11:40.834 --rc 
genhtml_legend=1 00:11:40.834 --rc geninfo_all_blocks=1 00:11:40.834 --rc geninfo_unexecuted_blocks=1 00:11:40.834 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:40.834 ' 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:40.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.834 --rc genhtml_branch_coverage=1 00:11:40.834 --rc genhtml_function_coverage=1 00:11:40.834 --rc genhtml_legend=1 00:11:40.834 --rc geninfo_all_blocks=1 00:11:40.834 --rc geninfo_unexecuted_blocks=1 00:11:40.834 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:40.834 ' 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.834 --rc genhtml_branch_coverage=1 00:11:40.834 --rc genhtml_function_coverage=1 00:11:40.834 --rc genhtml_legend=1 00:11:40.834 --rc geninfo_all_blocks=1 00:11:40.834 --rc geninfo_unexecuted_blocks=1 00:11:40.834 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:40.834 ' 00:11:40.834 17:28:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:40.834 17:28:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2102885 00:11:40.834 17:28:37 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:40.834 17:28:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2102885 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2102885 ']' 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.834 17:28:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:40.834 [2024-10-14 17:28:37.909771] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
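The cmdline test just above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on the RPC socket; any other call is expected to fail with JSON-RPC -32601 "Method not found", which is what the env_dpdk_get_mem_stats attempt further down demonstrates. A by-hand version of the same checks, assuming a target started that way on the default /var/tmp/spdk.sock:

    # allow-listed methods respond normally
    ./scripts/rpc.py spdk_get_version
    ./scripts/rpc.py rpc_get_methods
    # anything outside the allow-list is rejected with -32601 "Method not found"
    ./scripts/rpc.py env_dpdk_get_mem_stats || true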
00:11:40.834 [2024-10-14 17:28:37.909863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102885 ] 00:11:41.092 [2024-10-14 17:28:37.974089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.092 [2024-10-14 17:28:38.022503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.351 17:28:38 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.351 17:28:38 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:41.351 { 00:11:41.351 "version": "SPDK v25.01-pre git sha1 f1e77dead", 00:11:41.351 "fields": { 00:11:41.351 "major": 25, 00:11:41.351 "minor": 1, 00:11:41.351 "patch": 0, 00:11:41.351 "suffix": "-pre", 00:11:41.351 "commit": "f1e77dead" 00:11:41.351 } 00:11:41.351 } 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:41.351 17:28:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.351 17:28:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:41.351 17:28:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.609 17:28:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:41.609 17:28:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:41.609 17:28:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:11:41.609 17:28:38 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:11:41.609 17:28:38 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:41.609 request: 00:11:41.610 { 00:11:41.610 "method": "env_dpdk_get_mem_stats", 00:11:41.610 "req_id": 1 00:11:41.610 } 00:11:41.610 Got JSON-RPC error response 00:11:41.610 response: 00:11:41.610 { 00:11:41.610 "code": -32601, 00:11:41.610 "message": "Method not found" 00:11:41.610 } 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.610 17:28:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2102885 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2102885 ']' 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2102885 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.610 17:28:38 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102885 00:11:41.869 17:28:38 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.869 17:28:38 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.869 17:28:38 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102885' 00:11:41.869 killing process with pid 2102885 00:11:41.869 17:28:38 app_cmdline -- common/autotest_common.sh@969 -- # kill 2102885 00:11:41.869 17:28:38 app_cmdline -- common/autotest_common.sh@974 -- # wait 2102885 00:11:42.128 00:11:42.128 real 0m1.345s 00:11:42.128 user 0m1.531s 00:11:42.128 sys 0m0.491s 00:11:42.128 17:28:39 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.128 17:28:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:42.128 ************************************ 00:11:42.128 END TEST app_cmdline 00:11:42.128 ************************************ 00:11:42.128 17:28:39 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:11:42.128 17:28:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:42.128 17:28:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.128 17:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.128 ************************************ 00:11:42.128 START TEST version 00:11:42.128 ************************************ 00:11:42.128 17:28:39 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:11:42.128 * Looking for test storage... 
00:11:42.128 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:11:42.128 17:28:39 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.128 17:28:39 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.128 17:28:39 version -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.388 17:28:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.388 17:28:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.388 17:28:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.388 17:28:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.388 17:28:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.388 17:28:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.388 17:28:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.388 17:28:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.388 17:28:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.388 17:28:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.388 17:28:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.388 17:28:39 version -- scripts/common.sh@344 -- # case "$op" in 00:11:42.388 17:28:39 version -- scripts/common.sh@345 -- # : 1 00:11:42.388 17:28:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.388 17:28:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.388 17:28:39 version -- scripts/common.sh@365 -- # decimal 1 00:11:42.388 17:28:39 version -- scripts/common.sh@353 -- # local d=1 00:11:42.388 17:28:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.388 17:28:39 version -- scripts/common.sh@355 -- # echo 1 00:11:42.388 17:28:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.388 17:28:39 version -- scripts/common.sh@366 -- # decimal 2 00:11:42.388 17:28:39 version -- scripts/common.sh@353 -- # local d=2 00:11:42.388 17:28:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.388 17:28:39 version -- scripts/common.sh@355 -- # echo 2 00:11:42.388 17:28:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.388 17:28:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.388 17:28:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.388 17:28:39 version -- scripts/common.sh@368 -- # return 0 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.388 --rc genhtml_branch_coverage=1 00:11:42.388 --rc genhtml_function_coverage=1 00:11:42.388 --rc genhtml_legend=1 00:11:42.388 --rc geninfo_all_blocks=1 00:11:42.388 --rc geninfo_unexecuted_blocks=1 00:11:42.388 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.388 ' 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.388 --rc genhtml_branch_coverage=1 00:11:42.388 --rc genhtml_function_coverage=1 00:11:42.388 --rc genhtml_legend=1 00:11:42.388 --rc geninfo_all_blocks=1 00:11:42.388 --rc geninfo_unexecuted_blocks=1 00:11:42.388 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.388 ' 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.388 --rc genhtml_branch_coverage=1 00:11:42.388 --rc genhtml_function_coverage=1 00:11:42.388 --rc genhtml_legend=1 00:11:42.388 --rc geninfo_all_blocks=1 00:11:42.388 --rc geninfo_unexecuted_blocks=1 00:11:42.388 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.388 ' 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.388 --rc genhtml_branch_coverage=1 00:11:42.388 --rc genhtml_function_coverage=1 00:11:42.388 --rc genhtml_legend=1 00:11:42.388 --rc geninfo_all_blocks=1 00:11:42.388 --rc geninfo_unexecuted_blocks=1 00:11:42.388 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.388 ' 00:11:42.388 17:28:39 version -- app/version.sh@17 -- # get_header_version major 00:11:42.388 17:28:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # cut -f2 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.388 17:28:39 version -- app/version.sh@17 -- # major=25 00:11:42.388 17:28:39 version -- app/version.sh@18 -- # get_header_version minor 00:11:42.388 17:28:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # cut -f2 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.388 17:28:39 version -- app/version.sh@18 -- # minor=1 00:11:42.388 17:28:39 version -- app/version.sh@19 -- # get_header_version patch 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.388 17:28:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # cut -f2 00:11:42.388 17:28:39 version -- app/version.sh@19 -- # patch=0 00:11:42.388 17:28:39 version -- app/version.sh@20 -- # get_header_version suffix 00:11:42.388 17:28:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # cut -f2 00:11:42.388 17:28:39 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.388 17:28:39 version -- app/version.sh@20 -- # suffix=-pre 00:11:42.388 17:28:39 version -- app/version.sh@22 -- # version=25.1 00:11:42.388 17:28:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:42.388 17:28:39 version -- app/version.sh@28 -- # version=25.1rc0 00:11:42.388 17:28:39 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:11:42.388 17:28:39 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:11:42.388 17:28:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:42.388 17:28:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:42.388 00:11:42.388 real 0m0.273s 00:11:42.388 user 0m0.150s 00:11:42.388 sys 0m0.170s 00:11:42.388 17:28:39 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.388 17:28:39 version -- common/autotest_common.sh@10 -- # set +x 00:11:42.388 ************************************ 00:11:42.388 END TEST version 00:11:42.388 ************************************ 00:11:42.388 17:28:39 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:42.388 17:28:39 -- spdk/autotest.sh@194 -- # uname -s 00:11:42.388 17:28:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:42.388 17:28:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:42.388 17:28:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:42.388 17:28:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@256 -- # timing_exit lib 00:11:42.388 17:28:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.388 17:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.388 17:28:39 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:11:42.388 17:28:39 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:11:42.648 17:28:39 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:11:42.648 17:28:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:11:42.648 17:28:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:11:42.648 17:28:39 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:11:42.648 17:28:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:11:42.648 17:28:39 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:11:42.648 17:28:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:11:42.648 17:28:39 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:11:42.648 17:28:39 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:11:42.648 17:28:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:42.648 17:28:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.648 17:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.648 ************************************ 00:11:42.648 START TEST llvm_fuzz 00:11:42.648 ************************************ 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:11:42.648 * Looking for test storage... 
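The version test traced above does two things: it pulls SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep/cut/tr, and it asks the bundled Python package for spdk.__version__, failing unless the two strings match (25.1rc0 in this run). A standalone sketch of that comparison follows; the SPDK_ROOT default and the "-pre → rc0" mapping are assumptions inferred from this trace, not a quotation of version.sh.

```bash
#!/usr/bin/env bash
# Sketch of the version consistency check: read the version fields from the C
# header and compare the assembled string with what the Python package reports.
set -euo pipefail

SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}  # assumption
hdr=$SPDK_ROOT/include/spdk/version.h

get_header_version() {
    # '#define SPDK_VERSION_MAJOR<TAB>25' -> '25' (cut on the tab, strip quotes)
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 25 in this run
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre

version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0   # assumed mapping, yields 25.1rc0 here

py_version=$(PYTHONPATH=$SPDK_ROOT/python python3 -c 'import spdk; print(spdk.__version__)')

if [[ $py_version == "$version" ]]; then
    echo "version OK: $version"
else
    echo "mismatch: header says $version, python package reports $py_version" >&2
    exit 1
fi
```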
00:11:42.648 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.648 17:28:39 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.648 --rc genhtml_branch_coverage=1 00:11:42.648 --rc genhtml_function_coverage=1 00:11:42.648 --rc genhtml_legend=1 00:11:42.648 --rc geninfo_all_blocks=1 00:11:42.648 --rc geninfo_unexecuted_blocks=1 00:11:42.648 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.648 ' 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.648 --rc genhtml_branch_coverage=1 00:11:42.648 --rc genhtml_function_coverage=1 00:11:42.648 --rc genhtml_legend=1 00:11:42.648 --rc geninfo_all_blocks=1 00:11:42.648 --rc 
geninfo_unexecuted_blocks=1 00:11:42.648 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.648 ' 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.648 --rc genhtml_branch_coverage=1 00:11:42.648 --rc genhtml_function_coverage=1 00:11:42.648 --rc genhtml_legend=1 00:11:42.648 --rc geninfo_all_blocks=1 00:11:42.648 --rc geninfo_unexecuted_blocks=1 00:11:42.648 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.648 ' 00:11:42.648 17:28:39 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.648 --rc genhtml_branch_coverage=1 00:11:42.648 --rc genhtml_function_coverage=1 00:11:42.648 --rc genhtml_legend=1 00:11:42.648 --rc geninfo_all_blocks=1 00:11:42.649 --rc geninfo_unexecuted_blocks=1 00:11:42.649 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.649 ' 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:11:42.649 17:28:39 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.649 17:28:39 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:42.910 ************************************ 00:11:42.910 START TEST nvmf_llvm_fuzz 00:11:42.910 ************************************ 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:11:42.910 * Looking for test storage... 
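Just before nvmf_llvm_fuzz starts, llvm.sh builds its target list by globbing $rootdir/test/fuzz/llvm/, reducing the entries to basenames, and stepping through them with a case statement so that helper files such as common.sh and llvm-gcov.sh are skipped while the real suites (nvmf, vfio) are dispatched. A simplified standalone sketch of that loop follows; rootdir is an assumption and the run_test timing wrapper used by the real script is elided.

```bash
#!/usr/bin/env bash
# Sketch of the fuzzer-target enumeration and dispatch traced above.
set -euo pipefail

rootdir=${rootdir:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}  # assumption

get_fuzzer_targets() {
    local fuzzers=()
    if [[ -n ${FUZZER_TARGETS:-} ]]; then
        fuzzers=($FUZZER_TARGETS)                 # explicit, space-separated override
    else
        fuzzers=("$rootdir/test/fuzz/llvm/"*)     # everything in the llvm fuzz dir...
        fuzzers=("${fuzzers[@]##*/}")             # ...reduced to basenames
    fi
    echo "${fuzzers[@]}"
}

llvm_out=$rootdir/../output/llvm
mkdir -p "$rootdir/../corpus" "$llvm_out"

for fuzzer in $(get_fuzzer_targets); do
    case "$fuzzer" in
        nvmf | vfio)
            # helpers like common.sh and llvm-gcov.sh fall through and are skipped
            "$rootdir/test/fuzz/llvm/$fuzzer/run.sh"
            ;;
    esac
done
```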
00:11:42.910 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.910 --rc genhtml_branch_coverage=1 00:11:42.910 --rc genhtml_function_coverage=1 00:11:42.910 --rc genhtml_legend=1 00:11:42.910 --rc geninfo_all_blocks=1 00:11:42.910 --rc geninfo_unexecuted_blocks=1 00:11:42.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.910 ' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.910 --rc genhtml_branch_coverage=1 00:11:42.910 --rc genhtml_function_coverage=1 00:11:42.910 --rc genhtml_legend=1 00:11:42.910 --rc geninfo_all_blocks=1 00:11:42.910 --rc geninfo_unexecuted_blocks=1 00:11:42.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.910 ' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.910 --rc genhtml_branch_coverage=1 00:11:42.910 --rc genhtml_function_coverage=1 00:11:42.910 --rc genhtml_legend=1 00:11:42.910 --rc geninfo_all_blocks=1 00:11:42.910 --rc geninfo_unexecuted_blocks=1 00:11:42.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.910 ' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.910 --rc genhtml_branch_coverage=1 00:11:42.910 --rc genhtml_function_coverage=1 00:11:42.910 --rc genhtml_legend=1 00:11:42.910 --rc geninfo_all_blocks=1 00:11:42.910 --rc geninfo_unexecuted_blocks=1 00:11:42.910 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:42.910 ' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:42.910 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:11:42.911 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:42.911 #define SPDK_CONFIG_H 00:11:42.911 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:42.911 #define SPDK_CONFIG_APPS 1 00:11:42.911 #define SPDK_CONFIG_ARCH native 00:11:42.911 #undef SPDK_CONFIG_ASAN 00:11:42.911 #undef SPDK_CONFIG_AVAHI 00:11:42.911 #undef SPDK_CONFIG_CET 00:11:42.911 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:42.911 #define SPDK_CONFIG_COVERAGE 1 00:11:42.911 #define SPDK_CONFIG_CROSS_PREFIX 00:11:42.911 #undef SPDK_CONFIG_CRYPTO 00:11:42.911 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:42.911 #undef SPDK_CONFIG_CUSTOMOCF 00:11:42.911 #undef SPDK_CONFIG_DAOS 00:11:42.911 #define SPDK_CONFIG_DAOS_DIR 00:11:42.911 #define SPDK_CONFIG_DEBUG 1 00:11:42.911 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:42.911 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:11:42.911 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:42.911 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:42.911 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:42.911 #undef SPDK_CONFIG_DPDK_UADK 00:11:42.911 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:11:42.911 #define SPDK_CONFIG_EXAMPLES 1 00:11:42.911 #undef SPDK_CONFIG_FC 00:11:42.911 #define SPDK_CONFIG_FC_PATH 00:11:42.911 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:42.911 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:42.911 #define SPDK_CONFIG_FSDEV 1 00:11:42.911 #undef SPDK_CONFIG_FUSE 00:11:42.911 #define SPDK_CONFIG_FUZZER 1 00:11:42.911 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:11:42.911 #undef SPDK_CONFIG_GOLANG 00:11:42.911 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:42.911 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:42.911 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:42.911 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:42.911 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:42.911 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:42.911 #undef SPDK_CONFIG_HAVE_LZ4 00:11:42.911 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:42.911 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:42.911 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:42.911 #define SPDK_CONFIG_IDXD 1 00:11:42.911 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:42.911 #undef SPDK_CONFIG_IPSEC_MB 00:11:42.911 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:42.911 #define SPDK_CONFIG_ISAL 1 00:11:42.911 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:42.911 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:42.911 #define SPDK_CONFIG_LIBDIR 00:11:42.911 #undef SPDK_CONFIG_LTO 00:11:42.911 #define SPDK_CONFIG_MAX_LCORES 128 00:11:42.911 #define SPDK_CONFIG_NVME_CUSE 1 00:11:42.911 #undef SPDK_CONFIG_OCF 00:11:42.911 #define SPDK_CONFIG_OCF_PATH 00:11:42.911 #define SPDK_CONFIG_OPENSSL_PATH 00:11:42.911 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:42.911 #define SPDK_CONFIG_PGO_DIR 00:11:42.911 #undef SPDK_CONFIG_PGO_USE 00:11:42.911 #define SPDK_CONFIG_PREFIX /usr/local 00:11:42.911 #undef SPDK_CONFIG_RAID5F 00:11:42.911 #undef SPDK_CONFIG_RBD 00:11:42.911 #define SPDK_CONFIG_RDMA 1 00:11:42.911 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:42.911 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:42.911 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:42.911 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:42.911 #undef SPDK_CONFIG_SHARED 00:11:42.911 #undef SPDK_CONFIG_SMA 00:11:42.911 #define SPDK_CONFIG_TESTS 1 00:11:42.911 #undef SPDK_CONFIG_TSAN 00:11:42.911 #define SPDK_CONFIG_UBLK 1 00:11:42.911 #define SPDK_CONFIG_UBSAN 1 00:11:42.911 #undef SPDK_CONFIG_UNIT_TESTS 00:11:42.911 #undef SPDK_CONFIG_URING 00:11:42.911 #define SPDK_CONFIG_URING_PATH 00:11:42.911 #undef SPDK_CONFIG_URING_ZNS 00:11:42.911 #undef SPDK_CONFIG_USDT 00:11:42.912 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:42.912 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:42.912 #define SPDK_CONFIG_VFIO_USER 1 00:11:42.912 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:42.912 #define SPDK_CONFIG_VHOST 1 00:11:42.912 #define SPDK_CONFIG_VIRTIO 1 00:11:42.912 #undef SPDK_CONFIG_VTUNE 00:11:42.912 #define SPDK_CONFIG_VTUNE_DIR 00:11:42.912 #define SPDK_CONFIG_WERROR 1 00:11:42.912 #define SPDK_CONFIG_WPDK_DIR 00:11:42.912 #undef SPDK_CONFIG_XNVME 00:11:42.912 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.912 17:28:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:11:43.174 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:43.175 17:28:40 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.175 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 2103396 ]] 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 2103396 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.BGBILk 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.BGBILk/tests/nvmf /tmp/spdk.BGBILk 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86854463488 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500372480 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7645908992 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.176 
17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245422592 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894340096 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900074496 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5734400 00:11:43.176 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249846272 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=339968 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450024960 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450037248 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:43.177 * Looking for test storage... 
00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86854463488 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=9860501504 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:43.177 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.177 --rc genhtml_branch_coverage=1 00:11:43.177 --rc genhtml_function_coverage=1 00:11:43.177 --rc genhtml_legend=1 00:11:43.177 --rc geninfo_all_blocks=1 00:11:43.177 --rc geninfo_unexecuted_blocks=1 00:11:43.177 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:43.177 ' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.177 --rc genhtml_branch_coverage=1 00:11:43.177 --rc genhtml_function_coverage=1 00:11:43.177 --rc genhtml_legend=1 00:11:43.177 --rc geninfo_all_blocks=1 00:11:43.177 --rc geninfo_unexecuted_blocks=1 00:11:43.177 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:43.177 ' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.177 --rc genhtml_branch_coverage=1 00:11:43.177 --rc genhtml_function_coverage=1 00:11:43.177 --rc genhtml_legend=1 00:11:43.177 --rc geninfo_all_blocks=1 00:11:43.177 --rc geninfo_unexecuted_blocks=1 00:11:43.177 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:43.177 ' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.177 --rc genhtml_branch_coverage=1 00:11:43.177 --rc genhtml_function_coverage=1 00:11:43.177 --rc genhtml_legend=1 00:11:43.177 --rc geninfo_all_blocks=1 00:11:43.177 --rc geninfo_unexecuted_blocks=1 00:11:43.177 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:43.177 ' 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:11:43.177 17:28:40 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:11:43.177 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:11:43.178 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:11:43.178 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:11:43.178 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:43.178 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:43.178 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:43.178 17:28:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:11:43.437 [2024-10-14 17:28:40.275289] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:43.437 [2024-10-14 17:28:40.275358] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103464 ] 00:11:43.437 [2024-10-14 17:28:40.471014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.437 [2024-10-14 17:28:40.509839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.695 [2024-10-14 17:28:40.569363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.695 [2024-10-14 17:28:40.585500] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:11:43.695 INFO: Running with entropic power schedule (0xFF, 100). 00:11:43.695 INFO: Seed: 1605063115 00:11:43.695 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:43.695 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:43.695 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:11:43.695 INFO: A corpus is not provided, starting from an empty corpus 00:11:43.695 #2 INITED exec/s: 0 rss: 66Mb 00:11:43.695 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:43.695 This may also happen if the target rejected all inputs we tried so far 00:11:43.695 [2024-10-14 17:28:40.641057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:43.695 [2024-10-14 17:28:40.641088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.954 NEW_FUNC[1/713]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:11:43.954 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:43.954 #5 NEW cov: 12149 ft: 12147 corp: 2/86b lim: 320 exec/s: 0 rss: 74Mb L: 85/85 MS: 3 ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:11:43.954 [2024-10-14 17:28:40.982038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:43.954 [2024-10-14 17:28:40.982095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:43.954 #6 NEW cov: 12262 ft: 12818 corp: 3/171b lim: 320 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 ShuffleBytes- 00:11:44.214 [2024-10-14 17:28:41.051999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.214 [2024-10-14 17:28:41.052033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.214 #12 NEW cov: 12268 ft: 13151 corp: 4/256b lim: 320 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 ShuffleBytes- 00:11:44.214 [2024-10-14 17:28:41.092074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c62bc6c6 00:11:44.214 [2024-10-14 
17:28:41.092101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.214 #18 NEW cov: 12353 ft: 13434 corp: 5/342b lim: 320 exec/s: 0 rss: 74Mb L: 86/86 MS: 1 InsertByte- 00:11:44.214 [2024-10-14 17:28:41.152293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (16) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:44.214 [2024-10-14 17:28:41.152320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.214 NEW_FUNC[1/1]: 0x1937b18 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:11:44.214 #20 NEW cov: 12374 ft: 13673 corp: 6/427b lim: 320 exec/s: 0 rss: 74Mb L: 85/86 MS: 2 ChangeByte-InsertRepeatedBytes- 00:11:44.214 [2024-10-14 17:28:41.192364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.214 [2024-10-14 17:28:41.192391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.214 #21 NEW cov: 12374 ft: 13775 corp: 7/512b lim: 320 exec/s: 0 rss: 74Mb L: 85/86 MS: 1 CMP- DE: "\376\377\377\377"- 00:11:44.214 [2024-10-14 17:28:41.252560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c4c6c6c6 00:11:44.214 [2024-10-14 17:28:41.252586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.214 #22 NEW cov: 12374 ft: 13827 corp: 8/597b lim: 320 exec/s: 0 rss: 74Mb L: 85/86 MS: 1 ChangeBit- 00:11:44.214 [2024-10-14 17:28:41.292676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:44.214 [2024-10-14 17:28:41.292702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.473 #24 NEW cov: 12391 ft: 13929 corp: 9/683b lim: 320 exec/s: 0 rss: 74Mb L: 86/86 MS: 2 ShuffleBytes-CrossOver- 00:11:44.473 [2024-10-14 17:28:41.332778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.473 [2024-10-14 17:28:41.332806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.473 #25 NEW cov: 12391 ft: 13977 corp: 10/757b lim: 320 exec/s: 0 rss: 74Mb L: 74/86 MS: 1 EraseBytes- 00:11:44.473 [2024-10-14 17:28:41.393081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (16) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:44.473 [2024-10-14 17:28:41.393107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.473 [2024-10-14 17:28:41.393166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:44.473 [2024-10-14 17:28:41.393181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.473 
NEW_FUNC[1/1]: 0x14ff778 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:11:44.473 #31 NEW cov: 12422 ft: 14219 corp: 11/920b lim: 320 exec/s: 0 rss: 74Mb L: 163/163 MS: 1 CopyPart- 00:11:44.473 [2024-10-14 17:28:41.463190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:44.473 [2024-10-14 17:28:41.463218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.473 #32 NEW cov: 12422 ft: 14253 corp: 12/1006b lim: 320 exec/s: 0 rss: 74Mb L: 86/163 MS: 1 ChangeBinInt- 00:11:44.473 [2024-10-14 17:28:41.523490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:d1d1d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd1d1d1d1d1d1d1d1 00:11:44.473 [2024-10-14 17:28:41.523518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.473 [2024-10-14 17:28:41.523580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d1) qid:0 cid:5 nsid:d1d1d1d1 cdw10:c6c6c6c6 cdw11:c6c6c6c6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:44.473 [2024-10-14 17:28:41.523595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.732 NEW_FUNC[1/2]: 0x1938688 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:11:44.732 NEW_FUNC[2/2]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:44.732 #33 NEW cov: 12458 ft: 14728 corp: 13/1163b lim: 320 exec/s: 0 rss: 74Mb L: 157/163 MS: 1 InsertRepeatedBytes- 00:11:44.732 [2024-10-14 17:28:41.583491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.732 [2024-10-14 17:28:41.583518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.732 #34 NEW cov: 12458 ft: 14774 corp: 14/1248b lim: 320 exec/s: 0 rss: 74Mb L: 85/163 MS: 1 ShuffleBytes- 00:11:44.732 [2024-10-14 17:28:41.623583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.732 [2024-10-14 17:28:41.623611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.732 #35 NEW cov: 12458 ft: 14789 corp: 15/1337b lim: 320 exec/s: 35 rss: 74Mb L: 89/163 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:11:44.732 [2024-10-14 17:28:41.663697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.732 [2024-10-14 17:28:41.663723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.732 #36 NEW cov: 12458 ft: 14805 corp: 16/1405b lim: 320 exec/s: 36 rss: 74Mb L: 68/163 MS: 1 EraseBytes- 00:11:44.732 [2024-10-14 17:28:41.703929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:44.732 [2024-10-14 17:28:41.703956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:11:44.732 [2024-10-14 17:28:41.704008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.732 [2024-10-14 17:28:41.704022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.732 #37 NEW cov: 12459 ft: 14821 corp: 17/1548b lim: 320 exec/s: 37 rss: 74Mb L: 143/163 MS: 1 CopyPart- 00:11:44.732 [2024-10-14 17:28:41.743943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.732 [2024-10-14 17:28:41.743970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.732 #38 NEW cov: 12459 ft: 14871 corp: 18/1633b lim: 320 exec/s: 38 rss: 74Mb L: 85/163 MS: 1 ChangeBit- 00:11:44.732 [2024-10-14 17:28:41.784059] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:44.732 [2024-10-14 17:28:41.784085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.732 #39 NEW cov: 12459 ft: 14882 corp: 19/1720b lim: 320 exec/s: 39 rss: 74Mb L: 87/163 MS: 1 InsertByte- 00:11:44.991 [2024-10-14 17:28:41.824703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:d1d1d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd1d1d1d1d1d1d1d1 00:11:44.991 [2024-10-14 17:28:41.824734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.991 [2024-10-14 17:28:41.824797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d1) qid:0 cid:5 nsid:d1d1d1d1 cdw10:d1d1d100 cdw11:d1d1d1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:44.991 [2024-10-14 17:28:41.824812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.991 [2024-10-14 17:28:41.824872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d1) qid:0 cid:6 nsid:d1d1d1d1 cdw10:d1d1d1d1 cdw11:d1d1d1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:44.991 [2024-10-14 17:28:41.824887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:44.991 [2024-10-14 17:28:41.824947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c6) qid:0 cid:7 nsid:c6c6c6c6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc6c6c6c6c6c6c6c6 00:11:44.992 [2024-10-14 17:28:41.824961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:44.992 #40 NEW cov: 12459 ft: 15541 corp: 20/2016b lim: 320 exec/s: 40 rss: 74Mb L: 296/296 MS: 1 CopyPart- 00:11:44.992 [2024-10-14 17:28:41.884373] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:44.992 [2024-10-14 17:28:41.884400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.992 #46 NEW cov: 12459 ft: 15570 corp: 21/2103b lim: 320 exec/s: 46 rss: 74Mb L: 87/296 MS: 1 CopyPart- 00:11:44.992 [2024-10-14 17:28:41.944499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.992 [2024-10-14 17:28:41.944525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.992 #47 NEW cov: 12459 ft: 15581 corp: 22/2192b lim: 320 exec/s: 47 rss: 74Mb L: 89/296 MS: 1 ChangeByte- 00:11:44.992 [2024-10-14 17:28:42.004838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:00000000 cdw11:00000000 00:11:44.992 [2024-10-14 17:28:42.004864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.992 [2024-10-14 17:28:42.004928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c6) qid:0 cid:5 nsid:c6c6c6c6 cdw10:c6c6c6c6 cdw11:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc6c6c6c6c6c6c6c6 00:11:44.992 [2024-10-14 17:28:42.004944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.992 #48 NEW cov: 12459 ft: 15648 corp: 23/2333b lim: 320 exec/s: 48 rss: 74Mb L: 141/296 MS: 1 CopyPart- 00:11:44.992 [2024-10-14 17:28:42.044906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:44.992 [2024-10-14 17:28:42.044933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:44.992 [2024-10-14 17:28:42.044994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c6) qid:0 cid:5 nsid:c6c6c6c6 cdw10:00008000 cdw11:c6c60000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:44.992 [2024-10-14 17:28:42.045009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:44.992 #49 NEW cov: 12459 ft: 15671 corp: 24/2503b lim: 320 exec/s: 49 rss: 74Mb L: 170/296 MS: 1 CrossOver- 00:11:45.251 [2024-10-14 17:28:42.085076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:26262600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:45.251 [2024-10-14 17:28:42.085104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.251 [2024-10-14 17:28:42.085171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (26) qid:0 cid:5 nsid:26262626 cdw10:26262626 cdw11:26262626 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2626262626262626 00:11:45.251 [2024-10-14 17:28:42.085186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.251 #51 NEW cov: 12459 ft: 15684 corp: 25/2664b lim: 320 exec/s: 51 rss: 74Mb L: 161/296 MS: 2 CrossOver-InsertRepeatedBytes- 00:11:45.251 [2024-10-14 17:28:42.125036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:45.251 [2024-10-14 17:28:42.125063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.251 #52 NEW cov: 12459 ft: 15717 corp: 26/2749b lim: 320 exec/s: 52 rss: 74Mb L: 85/296 MS: 1 CopyPart- 00:11:45.251 [2024-10-14 17:28:42.185217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c62bc6c6 00:11:45.251 
[2024-10-14 17:28:42.185245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.251 #53 NEW cov: 12459 ft: 15739 corp: 27/2835b lim: 320 exec/s: 53 rss: 75Mb L: 86/296 MS: 1 ChangeBit- 00:11:45.251 [2024-10-14 17:28:42.245375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:45.251 [2024-10-14 17:28:42.245402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.251 #54 NEW cov: 12459 ft: 15753 corp: 28/2921b lim: 320 exec/s: 54 rss: 75Mb L: 86/296 MS: 1 InsertByte- 00:11:45.251 [2024-10-14 17:28:42.285494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c646c6 cdw11:c62bc6c6 00:11:45.251 [2024-10-14 17:28:42.285522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.251 #55 NEW cov: 12459 ft: 15767 corp: 29/3007b lim: 320 exec/s: 55 rss: 75Mb L: 86/296 MS: 1 ChangeBit- 00:11:45.251 [2024-10-14 17:28:42.325551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:fffffe00 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:45.251 [2024-10-14 17:28:42.325579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.510 #56 NEW cov: 12459 ft: 15788 corp: 30/3092b lim: 320 exec/s: 56 rss: 75Mb L: 85/296 MS: 1 ChangeBit- 00:11:45.510 [2024-10-14 17:28:42.365703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (16) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:45.510 [2024-10-14 17:28:42.365730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.510 #57 NEW cov: 12459 ft: 15804 corp: 31/3177b lim: 320 exec/s: 57 rss: 75Mb L: 85/296 MS: 1 ShuffleBytes- 00:11:45.510 [2024-10-14 17:28:42.405831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (16) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:45.510 [2024-10-14 17:28:42.405858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.510 #58 NEW cov: 12459 ft: 15829 corp: 32/3262b lim: 320 exec/s: 58 rss: 75Mb L: 85/296 MS: 1 ShuffleBytes- 00:11:45.510 [2024-10-14 17:28:42.445952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (16) qid:0 cid:4 nsid:fffffeff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:45.510 [2024-10-14 17:28:42.445978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.510 #59 NEW cov: 12459 ft: 15838 corp: 33/3351b lim: 320 exec/s: 59 rss: 75Mb L: 89/296 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:11:45.510 [2024-10-14 17:28:42.506117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (30) qid:0 cid:4 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6c6 00:11:45.510 [2024-10-14 17:28:42.506145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.510 #60 NEW cov: 12459 ft: 15840 
corp: 34/3436b lim: 320 exec/s: 60 rss: 75Mb L: 85/296 MS: 1 ChangeBinInt- 00:11:45.510 [2024-10-14 17:28:42.546250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (16) qid:0 cid:4 nsid:fffffeff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:11:45.510 [2024-10-14 17:28:42.546279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.510 #61 NEW cov: 12459 ft: 15845 corp: 35/3511b lim: 320 exec/s: 61 rss: 75Mb L: 75/296 MS: 1 EraseBytes- 00:11:45.770 [2024-10-14 17:28:42.606883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:d1d1d1d1 SGL TRANSPORT DATA BLOCK TRANSPORT 0xd1d1d1d1d1d1d1d1 00:11:45.770 [2024-10-14 17:28:42.606912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:45.770 [2024-10-14 17:28:42.606975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d1) qid:0 cid:5 nsid:d1d1d1d1 cdw10:d1d1d100 cdw11:d1d1d1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.770 [2024-10-14 17:28:42.606989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:45.770 [2024-10-14 17:28:42.607048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (d1) qid:0 cid:6 nsid:d1d1d1d1 cdw10:d1d1d1d1 cdw11:d1d1d1d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:45.770 [2024-10-14 17:28:42.607063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:45.770 [2024-10-14 17:28:42.607127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c6) qid:0 cid:7 nsid:c6c6c6c6 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc6c6c6c6c6c6c6c6 00:11:45.770 [2024-10-14 17:28:42.607141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:45.770 #62 NEW cov: 12459 ft: 15886 corp: 36/3807b lim: 320 exec/s: 31 rss: 75Mb L: 296/296 MS: 1 ChangeBinInt- 00:11:45.770 #62 DONE cov: 12459 ft: 15886 corp: 36/3807b lim: 320 exec/s: 31 rss: 75Mb 00:11:45.770 ###### Recommended dictionary. ###### 00:11:45.770 "\376\377\377\377" # Uses: 2 00:11:45.770 ###### End of recommended dictionary. 
###### 00:11:45.770 Done 62 runs in 2 second(s) 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:45.770 17:28:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:11:45.770 [2024-10-14 17:28:42.801171] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:45.770 [2024-10-14 17:28:42.801258] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103819 ] 00:11:46.029 [2024-10-14 17:28:42.991071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.029 [2024-10-14 17:28:43.029308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.029 [2024-10-14 17:28:43.088215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.029 [2024-10-14 17:28:43.104371] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:11:46.029 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:46.029 INFO: Seed: 4125060356 00:11:46.289 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:46.289 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:46.289 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:11:46.289 INFO: A corpus is not provided, starting from an empty corpus 00:11:46.289 #2 INITED exec/s: 0 rss: 66Mb 00:11:46.289 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:46.289 This may also happen if the target rejected all inputs we tried so far 00:11:46.289 [2024-10-14 17:28:43.159696] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x700a 00:11:46.289 [2024-10-14 17:28:43.159925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70bc0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.289 [2024-10-14 17:28:43.159957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.549 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:11:46.549 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:46.549 #9 NEW cov: 12247 ft: 12244 corp: 2/7b lim: 30 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 InsertRepeatedBytes-InsertByte- 00:11:46.549 [2024-10-14 17:28:43.500684] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.500815] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.500926] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.501038] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.501146] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:46.549 [2024-10-14 17:28:43.501378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.501421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.501488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.501513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.501577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.501597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.501660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 
17:28:43.501680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.501743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.501762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.549 #11 NEW cov: 12360 ft: 13494 corp: 3/37b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 2 InsertByte-InsertRepeatedBytes- 00:11:46.549 [2024-10-14 17:28:43.550639] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.550756] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.550862] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.550968] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.551081] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.549 [2024-10-14 17:28:43.551310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.551339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.551391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.551406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.551459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.551473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.551524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.551538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.549 [2024-10-14 17:28:43.551590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.549 [2024-10-14 17:28:43.551603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.549 #13 NEW cov: 12366 ft: 13671 corp: 4/67b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 2 EraseBytes-CrossOver- 00:11:46.549 [2024-10-14 17:28:43.610812] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.550 [2024-10-14 17:28:43.610926] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.550 [2024-10-14 17:28:43.611036] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.550 [2024-10-14 17:28:43.611142] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bfff 00:11:46.550 [2024-10-14 17:28:43.611248] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:46.550 [2024-10-14 17:28:43.611458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.550 [2024-10-14 17:28:43.611485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.550 [2024-10-14 17:28:43.611538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.550 [2024-10-14 17:28:43.611553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.550 [2024-10-14 17:28:43.611605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.550 [2024-10-14 17:28:43.611619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.550 [2024-10-14 17:28:43.611670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.550 [2024-10-14 17:28:43.611684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.550 [2024-10-14 17:28:43.611736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.550 [2024-10-14 17:28:43.611750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.809 #14 NEW cov: 12451 ft: 13892 corp: 5/97b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeBit- 00:11:46.809 [2024-10-14 17:28:43.670939] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.671062] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.671168] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.671266] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.671370] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:46.809 [2024-10-14 17:28:43.671570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.671596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.671648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.671662] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.671714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.671727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.671777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.671791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.671844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.671861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.809 #15 NEW cov: 12451 ft: 13970 corp: 6/127b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:46.809 [2024-10-14 17:28:43.711108] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.711221] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.711327] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.711430] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.711533] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.711740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.711767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.711821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.711836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.711890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.711905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.711954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.711969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.809 [2024-10-14 17:28:43.712019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.809 [2024-10-14 17:28:43.712039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.809 #16 NEW cov: 12451 ft: 14052 corp: 7/157b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:11:46.809 [2024-10-14 17:28:43.771242] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.771356] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.771462] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.809 [2024-10-14 17:28:43.771564] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.771669] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.771871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.771898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.771954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.771970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.772020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.772042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.772096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.772110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.772165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.772179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.810 #17 NEW cov: 12451 ft: 14124 corp: 8/187b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeBit- 00:11:46.810 [2024-10-14 17:28:43.831409] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.831527] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.831631] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.831732] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:46.810 [2024-10-14 17:28:43.831834] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.832038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.832080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.832134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.832148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.832203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.832216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.832268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff018342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.832282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.832335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.832349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:46.810 #18 NEW cov: 12451 ft: 14143 corp: 9/217b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:46.810 [2024-10-14 17:28:43.891585] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.891704] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006cff 00:11:46.810 [2024-10-14 17:28:43.891812] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.891919] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.892033] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:46.810 [2024-10-14 17:28:43.892239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.892268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.892321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.892336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.892389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.892403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.892455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.892469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:46.810 [2024-10-14 17:28:43.892520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.810 [2024-10-14 17:28:43.892533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.070 #19 NEW cov: 12451 ft: 14193 corp: 10/247b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:11:47.070 [2024-10-14 17:28:43.931652] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.931769] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.931874] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.931975] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.932082] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fbff 00:11:47.070 [2024-10-14 17:28:43.932282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.932309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.932361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.932376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.932428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.932442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.932492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.932506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.932558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.932572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:11:47.070 #20 NEW cov: 12451 ft: 14250 corp: 11/277b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeBit- 00:11:47.070 [2024-10-14 17:28:43.971766] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.971880] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.971985] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000042ff 00:11:47.070 [2024-10-14 17:28:43.972093] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:47.070 [2024-10-14 17:28:43.972200] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:43.972401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.972427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.972481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.972497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.972548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.972562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.972614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:f6018342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.972627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:43.972679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:43.972693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.070 #21 NEW cov: 12451 ft: 14311 corp: 12/307b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:11:47.070 [2024-10-14 17:28:44.031944] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.032063] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.032171] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.032278] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:47.070 [2024-10-14 17:28:44.032382] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.032601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:11:47.070 [2024-10-14 17:28:44.032627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.032680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.032695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.032748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.032763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.032815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff288342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.032833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.032883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.032897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.070 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:47.070 #22 NEW cov: 12474 ft: 14393 corp: 13/337b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:11:47.070 [2024-10-14 17:28:44.071991] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff01 00:11:47.070 [2024-10-14 17:28:44.072111] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.072319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.072346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.072399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:42ff83f6 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.072414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.070 #23 NEW cov: 12474 ft: 14693 corp: 14/353b lim: 30 exec/s: 0 rss: 75Mb L: 16/30 MS: 1 EraseBytes- 00:11:47.070 [2024-10-14 17:28:44.112143] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.112255] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.112356] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.112458] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.112557] 
ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.070 [2024-10-14 17:28:44.112759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.112785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.112838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.112852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.112905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.112920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.112971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.070 [2024-10-14 17:28:44.112985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.070 [2024-10-14 17:28:44.113043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.071 [2024-10-14 17:28:44.113058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.071 #24 NEW cov: 12474 ft: 14706 corp: 15/383b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 CrossOver- 00:11:47.071 [2024-10-14 17:28:44.152259] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.071 [2024-10-14 17:28:44.152373] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.071 [2024-10-14 17:28:44.152477] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.071 [2024-10-14 17:28:44.152581] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.071 [2024-10-14 17:28:44.152681] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.071 [2024-10-14 17:28:44.152880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.071 [2024-10-14 17:28:44.152906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.071 [2024-10-14 17:28:44.152961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.071 [2024-10-14 17:28:44.152976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.071 [2024-10-14 17:28:44.153032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 
cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.071 [2024-10-14 17:28:44.153046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.071 [2024-10-14 17:28:44.153097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.071 [2024-10-14 17:28:44.153112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.071 [2024-10-14 17:28:44.153164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.071 [2024-10-14 17:28:44.153178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.330 #25 NEW cov: 12474 ft: 14730 corp: 16/413b lim: 30 exec/s: 25 rss: 75Mb L: 30/30 MS: 1 CrossOver- 00:11:47.330 [2024-10-14 17:28:44.192382] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.192495] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.192600] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.192703] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.192809] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.193013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:25ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.193045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.193099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.193113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.193163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.193178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.193233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.193247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.193297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.193311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 
m:0 dnr:0 00:11:47.330 #26 NEW cov: 12474 ft: 14801 corp: 17/443b lim: 30 exec/s: 26 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:11:47.330 [2024-10-14 17:28:44.252610] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.252724] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.252824] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.252926] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.330 [2024-10-14 17:28:44.253037] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:47.330 [2024-10-14 17:28:44.253245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.253271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.253323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.253338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.253392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.253406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.253458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.330 [2024-10-14 17:28:44.253472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.330 [2024-10-14 17:28:44.253526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.253540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.331 #27 NEW cov: 12474 ft: 14855 corp: 18/473b lim: 30 exec/s: 27 rss: 75Mb L: 30/30 MS: 1 ShuffleBytes- 00:11:47.331 [2024-10-14 17:28:44.312756] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.312872] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.312979] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.313094] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.313199] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.313409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:47.331 [2024-10-14 17:28:44.313434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.313490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.313505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.313558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.313571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.313621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83bc cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.313635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.313685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:1eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.313699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.331 #28 NEW cov: 12474 ft: 14907 corp: 19/503b lim: 30 exec/s: 28 rss: 75Mb L: 30/30 MS: 1 CMP- DE: "\377\036"- 00:11:47.331 [2024-10-14 17:28:44.352806] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.352920] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.353023] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.353138] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:11:47.331 [2024-10-14 17:28:44.353248] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:47.331 [2024-10-14 17:28:44.353456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.353482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.353535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.353550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.353602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.353617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 
17:28:44.353668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.353682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.353731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.353745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.331 #29 NEW cov: 12497 ft: 14985 corp: 20/533b lim: 30 exec/s: 29 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:47.331 [2024-10-14 17:28:44.392961] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.393086] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.393200] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.393307] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.393414] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.331 [2024-10-14 17:28:44.393628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.393653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.393707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.393722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.393774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.393788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.393840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.393855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.331 [2024-10-14 17:28:44.393906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.331 [2024-10-14 17:28:44.393920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.592 #30 NEW cov: 12497 ft: 15006 corp: 21/563b lim: 30 exec/s: 30 rss: 75Mb L: 30/30 MS: 1 CopyPart- 00:11:47.592 [2024-10-14 17:28:44.453126] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 
00:11:47.592 [2024-10-14 17:28:44.453242] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.453348] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.453452] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bfff 00:11:47.592 [2024-10-14 17:28:44.453557] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:47.592 [2024-10-14 17:28:44.453767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.453793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.453846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.453861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.453913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.453928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.453981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.453995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.454052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ff1e83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.454066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.592 #31 NEW cov: 12497 ft: 15037 corp: 22/593b lim: 30 exec/s: 31 rss: 75Mb L: 30/30 MS: 1 PersAutoDict- DE: "\377\036"- 00:11:47.592 [2024-10-14 17:28:44.513329] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.513443] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.513552] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000042ff 00:11:47.592 [2024-10-14 17:28:44.513654] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:47.592 [2024-10-14 17:28:44.513759] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.513967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.513993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.514046] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.514062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.514113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.514127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.514177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:fe018342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.514192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.514244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.514259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.592 #32 NEW cov: 12497 ft: 15090 corp: 23/623b lim: 30 exec/s: 32 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:47.592 [2024-10-14 17:28:44.573481] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.573595] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.573935] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.574057] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:47.592 [2024-10-14 17:28:44.574166] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.574373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.574399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.574453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:fffd83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.574468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.574525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.574540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.574593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff018342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.574607] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.574661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.574676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.592 #33 NEW cov: 12497 ft: 15189 corp: 24/653b lim: 30 exec/s: 33 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:47.592 [2024-10-14 17:28:44.613556] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.613670] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.613773] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.613878] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.613980] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:47.592 [2024-10-14 17:28:44.614195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.614222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.614279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff5c83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.614294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.614346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.614360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.614412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.614427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.614482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.614497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.592 #34 NEW cov: 12497 ft: 15203 corp: 25/683b lim: 30 exec/s: 34 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:11:47.592 [2024-10-14 17:28:44.653660] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.653774] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.653878] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000ffff 00:11:47.592 [2024-10-14 17:28:44.653981] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:11:47.592 [2024-10-14 17:28:44.654091] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:47.592 [2024-10-14 17:28:44.654306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.654332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.654387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1eff831e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.654401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.654454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.654468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.654521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.654535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.592 [2024-10-14 17:28:44.654584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.592 [2024-10-14 17:28:44.654598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.853 #35 NEW cov: 12497 ft: 15210 corp: 26/713b lim: 30 exec/s: 35 rss: 75Mb L: 30/30 MS: 1 PersAutoDict- DE: "\377\036"- 00:11:47.853 [2024-10-14 17:28:44.713880] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.713996] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.714127] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.714237] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:11:47.853 [2024-10-14 17:28:44.714347] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:47.853 [2024-10-14 17:28:44.714560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.714587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.714641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:1eff831e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.714656] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.714711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.714726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.714777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.714791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.714845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:fffa83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.714860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.853 #36 NEW cov: 12497 ft: 15238 corp: 27/743b lim: 30 exec/s: 36 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:47.853 [2024-10-14 17:28:44.774016] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.774137] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.774245] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:11:47.853 [2024-10-14 17:28:44.774351] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:47.853 [2024-10-14 17:28:44.774460] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.774667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.774693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.774746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.774761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.774813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff000e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.774826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.774877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff018342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.774891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.774940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.774953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.853 #37 NEW cov: 12497 ft: 15286 corp: 28/773b lim: 30 exec/s: 37 rss: 75Mb L: 30/30 MS: 1 CMP- DE: "\016\000"- 00:11:47.853 [2024-10-14 17:28:44.814121] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:11:47.853 [2024-10-14 17:28:44.814238] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.814343] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.814446] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.814550] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.814758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.814785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.814841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.814856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.814913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.814928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.814985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.814999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.815053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.815067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.853 #38 NEW cov: 12497 ft: 15361 corp: 29/803b lim: 30 exec/s: 38 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:47.853 [2024-10-14 17:28:44.854229] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:11:47.853 [2024-10-14 17:28:44.854348] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.854458] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.854569] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.854676] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: 
Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.854885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.854911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.854965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.854980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.855034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.855049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.855102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.855116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.855170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.855185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:47.853 #39 NEW cov: 12497 ft: 15365 corp: 30/833b lim: 30 exec/s: 39 rss: 75Mb L: 30/30 MS: 1 ChangeBit- 00:11:47.853 [2024-10-14 17:28:44.914356] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.914475] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000fff6 00:11:47.853 [2024-10-14 17:28:44.914581] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:47.853 [2024-10-14 17:28:44.914778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:71ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.914805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.914861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.914880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:47.853 [2024-10-14 17:28:44.914934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.853 [2024-10-14 17:28:44.914948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:47.854 #40 NEW cov: 12497 ft: 15601 corp: 31/852b lim: 30 
exec/s: 40 rss: 75Mb L: 19/30 MS: 1 EraseBytes- 00:11:48.113 [2024-10-14 17:28:44.954507] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:44.954622] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:44.954729] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:11:48.113 [2024-10-14 17:28:44.954833] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xbfff 00:11:48.113 [2024-10-14 17:28:44.954938] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003f0a 00:11:48.113 [2024-10-14 17:28:44.955160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.955186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.955240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.955256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.955308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0011 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.955322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.955372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.955387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.955439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.955453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:48.113 #41 NEW cov: 12497 ft: 15612 corp: 32/882b lim: 30 exec/s: 41 rss: 75Mb L: 30/30 MS: 1 CMP- DE: "\021\000\000\000\000\000\000\000"- 00:11:48.113 [2024-10-14 17:28:44.994632] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:44.994747] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:44.994856] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:44.994962] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:44.995078] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000fbff 00:11:48.113 [2024-10-14 17:28:44.995283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:70ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.995310] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.995364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.995382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.995435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.995450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.995499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.995514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:44.995566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:44.995580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:48.113 #42 NEW cov: 12497 ft: 15615 corp: 33/912b lim: 30 exec/s: 42 rss: 75Mb L: 30/30 MS: 1 CopyPart- 00:11:48.113 [2024-10-14 17:28:45.054711] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:11:48.113 [2024-10-14 17:28:45.054827] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001eff 00:11:48.113 [2024-10-14 17:28:45.055038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.055062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:45.055118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.055133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.113 #43 NEW cov: 12497 ft: 15626 corp: 34/929b lim: 30 exec/s: 43 rss: 75Mb L: 17/30 MS: 1 CrossOver- 00:11:48.113 [2024-10-14 17:28:45.094900] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfaff 00:11:48.113 [2024-10-14 17:28:45.095017] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:45.095129] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:45.095247] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f6ff 00:11:48.113 [2024-10-14 17:28:45.095353] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:11:48.113 [2024-10-14 17:28:45.095559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:4 nsid:0 cdw10:71ff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.095586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:45.095639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.095653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:45.095704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.095718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:45.095771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ff288342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.095788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:48.113 [2024-10-14 17:28:45.095843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.113 [2024-10-14 17:28:45.095857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:48.113 #44 NEW cov: 12497 ft: 15633 corp: 35/959b lim: 30 exec/s: 44 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:11:48.113 [2024-10-14 17:28:45.154986] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:11:48.113 [2024-10-14 17:28:45.155111] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001eff 00:11:48.114 [2024-10-14 17:28:45.155315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.114 [2024-10-14 17:28:45.155340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.114 [2024-10-14 17:28:45.155393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.114 [2024-10-14 17:28:45.155408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.114 #45 NEW cov: 12497 ft: 15644 corp: 36/976b lim: 30 exec/s: 22 rss: 76Mb L: 17/30 MS: 1 CrossOver- 00:11:48.114 #45 DONE cov: 12497 ft: 15644 corp: 36/976b lim: 30 exec/s: 22 rss: 76Mb 00:11:48.114 ###### Recommended dictionary. ###### 00:11:48.114 "\377\036" # Uses: 2 00:11:48.114 "\016\000" # Uses: 0 00:11:48.114 "\021\000\000\000\000\000\000\000" # Uses: 0 00:11:48.114 ###### End of recommended dictionary. 
###### 00:11:48.114 Done 45 runs in 2 second(s) 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:48.373 17:28:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:11:48.373 [2024-10-14 17:28:45.348836] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:48.373 [2024-10-14 17:28:45.348909] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104178 ] 00:11:48.632 [2024-10-14 17:28:45.541996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.632 [2024-10-14 17:28:45.581233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.632 [2024-10-14 17:28:45.640162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.632 [2024-10-14 17:28:45.656314] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:11:48.632 INFO: Running with entropic power schedule (0xFF, 100). 
00:11:48.632 INFO: Seed: 2381107026 00:11:48.632 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:48.632 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:48.632 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:11:48.632 INFO: A corpus is not provided, starting from an empty corpus 00:11:48.632 #2 INITED exec/s: 0 rss: 66Mb 00:11:48.632 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:48.632 This may also happen if the target rejected all inputs we tried so far 00:11:48.632 [2024-10-14 17:28:45.712031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.632 [2024-10-14 17:28:45.712061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:48.632 [2024-10-14 17:28:45.712115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.632 [2024-10-14 17:28:45.712129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:48.632 [2024-10-14 17:28:45.712184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.632 [2024-10-14 17:28:45.712198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.150 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:11:49.150 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:49.150 #4 NEW cov: 12203 ft: 12202 corp: 2/26b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 2 ChangeByte-InsertRepeatedBytes- 00:11:49.150 [2024-10-14 17:28:46.052853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e700e7 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.150 [2024-10-14 17:28:46.052912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.150 #6 NEW cov: 12316 ft: 13152 corp: 3/39b lim: 35 exec/s: 0 rss: 74Mb L: 13/25 MS: 2 ChangeByte-InsertRepeatedBytes- 00:11:49.150 [2024-10-14 17:28:46.102868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e700e7 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.150 [2024-10-14 17:28:46.102896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.150 [2024-10-14 17:28:46.102952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e7e700e7 cdw11:03000303 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.150 [2024-10-14 17:28:46.102967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.150 #7 NEW cov: 12322 ft: 13604 corp: 4/57b lim: 35 exec/s: 0 rss: 74Mb L: 18/25 MS: 1 InsertRepeatedBytes- 00:11:49.150 
[2024-10-14 17:28:46.162920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e700e7 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.150 [2024-10-14 17:28:46.162948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.150 #8 NEW cov: 12407 ft: 14026 corp: 5/70b lim: 35 exec/s: 0 rss: 74Mb L: 13/25 MS: 1 ShuffleBytes- 00:11:49.150 [2024-10-14 17:28:46.203142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e70032 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.150 [2024-10-14 17:28:46.203170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.150 [2024-10-14 17:28:46.203227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e7e700e7 cdw11:0300e703 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.150 [2024-10-14 17:28:46.203242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.410 #9 NEW cov: 12407 ft: 14146 corp: 6/89b lim: 35 exec/s: 0 rss: 74Mb L: 19/25 MS: 1 InsertByte- 00:11:49.410 [2024-10-14 17:28:46.263500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.263527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.263586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.263602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.263657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.263671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.410 #10 NEW cov: 12407 ft: 14247 corp: 7/114b lim: 35 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:11:49.410 [2024-10-14 17:28:46.323617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.323643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.323700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.323714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.323774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.323790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.410 #16 NEW cov: 12407 ft: 14352 corp: 8/140b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertByte- 00:11:49.410 [2024-10-14 17:28:46.383848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6868006d cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.383875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.383932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.383949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.410 #17 NEW cov: 12407 ft: 14713 corp: 9/165b lim: 35 exec/s: 0 rss: 74Mb L: 25/26 MS: 1 CMP- DE: "\001\000\177JL\016\317m"- 00:11:49.410 [2024-10-14 17:28:46.423906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.423932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.423991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.424005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.424062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.424077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.410 #18 NEW cov: 12407 ft: 14752 corp: 10/191b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 CopyPart- 00:11:49.410 [2024-10-14 17:28:46.484147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6868006d cdw11:6800e668 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.484174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.410 [2024-10-14 17:28:46.484234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.410 [2024-10-14 17:28:46.484249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.670 #19 NEW cov: 12407 ft: 14817 corp: 11/217b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertByte- 00:11:49.670 [2024-10-14 17:28:46.544223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.544250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.544309] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a5680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.544323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.544382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.544396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.670 #20 NEW cov: 12407 ft: 14923 corp: 12/243b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 ChangeByte- 00:11:49.670 [2024-10-14 17:28:46.584365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.584391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.584450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a5680068 cdw11:98006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.584465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.584521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.584539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.670 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:49.670 #21 NEW cov: 12430 ft: 14975 corp: 13/269b lim: 35 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 ChangeBinInt- 00:11:49.670 [2024-10-14 17:28:46.644777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6868006d cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.644803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.644864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.644878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.644938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.644954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:49.670 #22 NEW cov: 12430 ft: 15352 corp: 14/303b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 PersAutoDict- DE: "\001\000\177JL\016\317m"- 00:11:49.670 [2024-10-14 17:28:46.684812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6868006d cdw11:6800e668 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.684838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.684896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.684910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.670 #23 NEW cov: 12430 ft: 15445 corp: 15/329b lim: 35 exec/s: 23 rss: 74Mb L: 26/34 MS: 1 ChangeByte- 00:11:49.670 [2024-10-14 17:28:46.744852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.744878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.744937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.744952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.670 [2024-10-14 17:28:46.745010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.670 [2024-10-14 17:28:46.745025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.930 #24 NEW cov: 12430 ft: 15478 corp: 16/355b lim: 35 exec/s: 24 rss: 75Mb L: 26/34 MS: 1 ChangeBit- 00:11:49.930 [2024-10-14 17:28:46.805151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6868006d cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.805177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.805237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.805255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.930 #25 NEW cov: 12430 ft: 15539 corp: 17/380b lim: 35 exec/s: 25 rss: 75Mb L: 25/34 MS: 1 ChangeByte- 00:11:49.930 [2024-10-14 17:28:46.845132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.845159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.845218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.845234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.845293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68001b68 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.845307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.930 #26 NEW cov: 12430 ft: 15557 corp: 18/405b lim: 35 exec/s: 26 rss: 75Mb L: 25/34 MS: 1 CopyPart- 00:11:49.930 [2024-10-14 17:28:46.885406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68010068 cdw11:4a00007f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.885433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.885493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cf6d000e cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.885508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.885564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68a50068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.885578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.885638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:6868008f cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.885652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:49.930 #27 NEW cov: 12430 ft: 15710 corp: 19/439b lim: 35 exec/s: 27 rss: 75Mb L: 34/34 MS: 1 PersAutoDict- DE: "\001\000\177JL\016\317m"- 00:11:49.930 [2024-10-14 17:28:46.945438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.945464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.945522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.945537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.945594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:6800684a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.945609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.930 #28 NEW cov: 12430 ft: 15727 corp: 20/462b lim: 35 exec/s: 28 rss: 75Mb L: 23/34 MS: 1 EraseBytes- 00:11:49.930 [2024-10-14 17:28:46.985692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68010068 cdw11:4a0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.985720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.985779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:cf6d000e cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.985794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.985850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68a50068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.985865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:49.930 [2024-10-14 17:28:46.985919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:6868008f cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:49.930 [2024-10-14 17:28:46.985934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:50.189 #29 NEW cov: 12430 ft: 15741 corp: 21/496b lim: 35 exec/s: 29 rss: 75Mb L: 34/34 MS: 1 ChangeBit- 00:11:50.189 [2024-10-14 17:28:47.045590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e70032 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.189 [2024-10-14 17:28:47.045617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.189 [2024-10-14 17:28:47.045675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e7e700e7 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.189 [2024-10-14 17:28:47.045690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.189 #30 NEW cov: 12430 ft: 15776 corp: 22/515b lim: 35 exec/s: 30 rss: 75Mb L: 19/34 MS: 1 CopyPart- 00:11:50.189 [2024-10-14 17:28:47.105736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.189 [2024-10-14 17:28:47.105762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.189 [2024-10-14 17:28:47.105820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:686800a5 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.189 [2024-10-14 17:28:47.105835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.189 #32 NEW cov: 12430 ft: 15819 corp: 23/533b lim: 35 exec/s: 32 rss: 75Mb L: 18/34 MS: 2 CopyPart-CrossOver- 00:11:50.189 [2024-10-14 17:28:47.145746] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:50.189 [2024-10-14 17:28:47.145993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.189 [2024-10-14 17:28:47.146020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.189 [2024-10-14 17:28:47.146084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.146100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.190 [2024-10-14 17:28:47.146156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:001a0000 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.146172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.190 #33 NEW cov: 12441 ft: 15901 corp: 24/559b lim: 35 exec/s: 33 rss: 75Mb L: 26/34 MS: 1 ChangeBinInt- 00:11:50.190 [2024-10-14 17:28:47.206193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006824 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.206219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.190 [2024-10-14 17:28:47.206277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.206292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.190 [2024-10-14 17:28:47.206351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.206366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.190 #34 NEW cov: 12441 ft: 15910 corp: 25/586b lim: 35 exec/s: 34 rss: 75Mb L: 27/34 MS: 1 InsertByte- 00:11:50.190 [2024-10-14 17:28:47.246416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.246442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.190 [2024-10-14 17:28:47.246499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.246514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.190 [2024-10-14 17:28:47.246571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.246585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.190 [2024-10-14 17:28:47.246640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.190 [2024-10-14 17:28:47.246654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:50.190 #35 NEW cov: 12441 ft: 15925 corp: 26/615b lim: 35 exec/s: 35 rss: 75Mb L: 29/34 MS: 1 CopyPart- 00:11:50.449 
[2024-10-14 17:28:47.286554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:00006801 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.286584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.286645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:4c0e004a cdw11:6800cf6d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.286661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.286718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.286732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.286788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.286804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:50.449 #36 NEW cov: 12441 ft: 15936 corp: 27/644b lim: 35 exec/s: 36 rss: 75Mb L: 29/34 MS: 1 PersAutoDict- DE: "\001\000\177JL\016\317m"- 00:11:50.449 [2024-10-14 17:28:47.346331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6868000a cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.346356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.449 #37 NEW cov: 12441 ft: 15963 corp: 28/651b lim: 35 exec/s: 37 rss: 75Mb L: 7/34 MS: 1 CrossOver- 00:11:50.449 [2024-10-14 17:28:47.386369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e700e7 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.386395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.449 #38 NEW cov: 12441 ft: 15976 corp: 29/664b lim: 35 exec/s: 38 rss: 75Mb L: 13/34 MS: 1 ChangeByte- 00:11:50.449 [2024-10-14 17:28:47.446681] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:50.449 [2024-10-14 17:28:47.446918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.446945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.447002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6868003b cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.447018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.447079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:68001a68 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.447096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.449 #39 NEW cov: 12441 ft: 15996 corp: 30/691b lim: 35 exec/s: 39 rss: 75Mb L: 27/34 MS: 1 InsertByte- 00:11:50.449 [2024-10-14 17:28:47.507153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.507179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.507237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.507252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.507310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.507324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.449 [2024-10-14 17:28:47.507382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.449 [2024-10-14 17:28:47.507397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:50.449 #40 NEW cov: 12441 ft: 16002 corp: 31/724b lim: 35 exec/s: 40 rss: 75Mb L: 33/34 MS: 1 CopyPart- 00:11:50.708 [2024-10-14 17:28:47.547001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e70032 cdw11:e700e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.708 [2024-10-14 17:28:47.547032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.708 [2024-10-14 17:28:47.547107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e7e700e7 cdw11:0300e7a3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.708 [2024-10-14 17:28:47.547126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.708 #41 NEW cov: 12441 ft: 16007 corp: 32/744b lim: 35 exec/s: 41 rss: 76Mb L: 20/34 MS: 1 InsertByte- 00:11:50.708 [2024-10-14 17:28:47.587239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6c680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.708 [2024-10-14 17:28:47.587265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.708 [2024-10-14 17:28:47.587323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:a5680068 cdw11:98006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.708 [2024-10-14 17:28:47.587337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.708 [2024-10-14 
17:28:47.587395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.708 [2024-10-14 17:28:47.587410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.708 #42 NEW cov: 12441 ft: 16038 corp: 33/770b lim: 35 exec/s: 42 rss: 76Mb L: 26/34 MS: 1 ChangeBit- 00:11:50.708 [2024-10-14 17:28:47.627345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.708 [2024-10-14 17:28:47.627373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.709 [2024-10-14 17:28:47.627431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:01006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.709 [2024-10-14 17:28:47.627446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.709 [2024-10-14 17:28:47.627505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:4a4c007f cdw11:6d000ecf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.709 [2024-10-14 17:28:47.627519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.709 #43 NEW cov: 12441 ft: 16039 corp: 34/796b lim: 35 exec/s: 43 rss: 76Mb L: 26/34 MS: 1 PersAutoDict- DE: "\001\000\177JL\016\317m"- 00:11:50.709 [2024-10-14 17:28:47.667197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:e7e700e7 cdw11:e700e7e6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.709 [2024-10-14 17:28:47.667223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.709 #44 NEW cov: 12441 ft: 16044 corp: 35/809b lim: 35 exec/s: 44 rss: 76Mb L: 13/34 MS: 1 ChangeBit- 00:11:50.709 [2024-10-14 17:28:47.707572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:68680068 cdw11:68006824 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.709 [2024-10-14 17:28:47.707599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:50.709 [2024-10-14 17:28:47.707657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.709 [2024-10-14 17:28:47.707673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:50.709 [2024-10-14 17:28:47.707729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:68680068 cdw11:68006868 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:50.709 [2024-10-14 17:28:47.707743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:50.709 #45 NEW cov: 12441 ft: 16046 corp: 36/836b lim: 35 exec/s: 22 rss: 76Mb L: 27/34 MS: 1 CopyPart- 00:11:50.709 #45 DONE cov: 12441 ft: 16046 corp: 36/836b lim: 35 exec/s: 22 rss: 76Mb 00:11:50.709 ###### Recommended dictionary. 
###### 00:11:50.709 "\001\000\177JL\016\317m" # Uses: 4 00:11:50.709 ###### End of recommended dictionary. ###### 00:11:50.709 Done 45 runs in 2 second(s) 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:50.968 17:28:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:11:50.968 [2024-10-14 17:28:47.900280] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
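The xtrace above records how nvmf/run.sh prepares each fuzzer instance before launching it: it derives a per-target TCP port by appending the zero-padded fuzzer index to "44" (so fuzzer 3 gets 4403), creates a dedicated corpus directory, builds the transport ID string, rewrites the shared fuzz_json.conf so the target listens on that port, registers two known-leak suppressions for LSAN, and then starts llvm_nvme_fuzz pinned to core 0x1 with 512 MB of memory. A rough standalone sketch of that sequence follows; it is an illustration only, SPDK_ROOT is a placeholder rather than a path from this log, and the output redirections for sed and for the suppression file are inferred because the xtrace does not show them:

    # sketch only: per-target setup as traced above, under the stated assumptions
    SPDK_ROOT=/path/to/spdk            # placeholder, not a path from this log
    i=3                                # fuzzer index
    port=44$(printf %02d "$i")         # -> 4403
    corpus="$SPDK_ROOT/../corpus/llvm_nvmf_$i"
    mkdir -p "$corpus"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$i.conf"
    echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_nvmf_fuzz
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
    LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
        "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m 0x1 -s 512 -P "$SPDK_ROOT/../output/llvm/" -F "$trid" \
        -c "/tmp/fuzz_json_$i.conf" -t 1 -D "$corpus" -Z "$i"

The same steps repeat for each index, which is why the run started further below listens on trsvcid 4404.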
00:11:50.968 [2024-10-14 17:28:47.900349] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104531 ] 00:11:51.227 [2024-10-14 17:28:48.095566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.227 [2024-10-14 17:28:48.134448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.227 [2024-10-14 17:28:48.193392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.227 [2024-10-14 17:28:48.209544] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:11:51.227 INFO: Running with entropic power schedule (0xFF, 100). 00:11:51.227 INFO: Seed: 640135232 00:11:51.227 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:51.227 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:51.227 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:11:51.227 INFO: A corpus is not provided, starting from an empty corpus 00:11:51.227 #2 INITED exec/s: 0 rss: 66Mb 00:11:51.227 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:51.227 This may also happen if the target rejected all inputs we tried so far 00:11:51.227 [2024-10-14 17:28:48.275235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.227 [2024-10-14 17:28:48.275270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.744 NEW_FUNC[1/720]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:11:51.744 NEW_FUNC[2/720]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:51.744 #4 NEW cov: 12339 ft: 12323 corp: 2/11b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 2 CrossOver-InsertRepeatedBytes- 00:11:51.745 [2024-10-14 17:28:48.616257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.745 [2024-10-14 17:28:48.616316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.745 #5 NEW cov: 12469 ft: 12970 corp: 3/21b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:11:51.745 [2024-10-14 17:28:48.686147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.745 [2024-10-14 17:28:48.686175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.745 #6 NEW cov: 12475 ft: 13225 corp: 4/31b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:11:51.745 [2024-10-14 17:28:48.726241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.745 [2024-10-14 17:28:48.726268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:51.745 #7 NEW cov: 12560 ft: 13472 
corp: 5/41b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:11:51.745 NEW_FUNC[1/2]: 0x14af288 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:784 00:11:51.745 NEW_FUNC[2/2]: 0x14d6d08 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3702 00:11:51.745 #8 NEW cov: 12616 ft: 13754 corp: 6/51b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:11:52.003 #9 NEW cov: 12616 ft: 13894 corp: 7/59b lim: 20 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 CrossOver- 00:11:52.003 #10 NEW cov: 12616 ft: 13977 corp: 8/69b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:11:52.003 [2024-10-14 17:28:48.946942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.003 [2024-10-14 17:28:48.946971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.003 #11 NEW cov: 12616 ft: 14077 corp: 9/79b lim: 20 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:11:52.003 [2024-10-14 17:28:49.007114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.003 [2024-10-14 17:28:49.007141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.003 #12 NEW cov: 12616 ft: 14136 corp: 10/88b lim: 20 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 EraseBytes- 00:11:52.261 #13 NEW cov: 12616 ft: 14208 corp: 11/96b lim: 20 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 CrossOver- 00:11:52.261 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:52.261 #14 NEW cov: 12639 ft: 14326 corp: 12/104b lim: 20 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:11:52.261 [2024-10-14 17:28:49.187722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.261 [2024-10-14 17:28:49.187750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.261 NEW_FUNC[1/1]: 0x155efc8 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3649 00:11:52.261 #17 NEW cov: 12670 ft: 14762 corp: 13/119b lim: 20 exec/s: 0 rss: 74Mb L: 15/15 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:11:52.261 #18 NEW cov: 12670 ft: 15009 corp: 14/123b lim: 20 exec/s: 0 rss: 74Mb L: 4/15 MS: 1 EraseBytes- 00:11:52.261 [2024-10-14 17:28:49.267820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.261 [2024-10-14 17:28:49.267846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.261 #19 NEW cov: 12670 ft: 15035 corp: 15/133b lim: 20 exec/s: 19 rss: 75Mb L: 10/15 MS: 1 InsertByte- 00:11:52.261 [2024-10-14 17:28:49.328290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.261 [2024-10-14 17:28:49.328317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.519 #20 NEW cov: 12687 ft: 15258 corp: 16/152b lim: 20 
exec/s: 20 rss: 75Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:11:52.519 [2024-10-14 17:28:49.388153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.519 [2024-10-14 17:28:49.388192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.519 #21 NEW cov: 12687 ft: 15368 corp: 17/162b lim: 20 exec/s: 21 rss: 75Mb L: 10/19 MS: 1 ChangeByte- 00:11:52.519 #22 NEW cov: 12687 ft: 15386 corp: 18/170b lim: 20 exec/s: 22 rss: 75Mb L: 8/19 MS: 1 ChangeBinInt- 00:11:52.519 [2024-10-14 17:28:49.488554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.519 [2024-10-14 17:28:49.488581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.519 #23 NEW cov: 12687 ft: 15422 corp: 19/183b lim: 20 exec/s: 23 rss: 75Mb L: 13/19 MS: 1 CrossOver- 00:11:52.519 #24 NEW cov: 12687 ft: 15492 corp: 20/193b lim: 20 exec/s: 24 rss: 75Mb L: 10/19 MS: 1 ChangeBinInt- 00:11:52.519 #25 NEW cov: 12687 ft: 15529 corp: 21/204b lim: 20 exec/s: 25 rss: 75Mb L: 11/19 MS: 1 InsertByte- 00:11:52.778 #26 NEW cov: 12687 ft: 15538 corp: 22/208b lim: 20 exec/s: 26 rss: 75Mb L: 4/19 MS: 1 ChangeBit- 00:11:52.778 [2024-10-14 17:28:49.668868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.778 [2024-10-14 17:28:49.668895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.778 #27 NEW cov: 12687 ft: 15600 corp: 23/218b lim: 20 exec/s: 27 rss: 75Mb L: 10/19 MS: 1 CrossOver- 00:11:52.778 #28 NEW cov: 12687 ft: 15634 corp: 24/227b lim: 20 exec/s: 28 rss: 75Mb L: 9/19 MS: 1 InsertByte- 00:11:52.778 [2024-10-14 17:28:49.789291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.778 [2024-10-14 17:28:49.789318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:52.778 #29 NEW cov: 12687 ft: 15686 corp: 25/237b lim: 20 exec/s: 29 rss: 75Mb L: 10/19 MS: 1 CrossOver- 00:11:52.778 [2024-10-14 17:28:49.849391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.778 [2024-10-14 17:28:49.849417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.037 #30 NEW cov: 12687 ft: 15692 corp: 26/247b lim: 20 exec/s: 30 rss: 75Mb L: 10/19 MS: 1 ChangeByte- 00:11:53.037 [2024-10-14 17:28:49.889739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.037 [2024-10-14 17:28:49.889766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.037 NEW_FUNC[1/1]: 0x12d1538 in nvmf_ctrlr_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3550 00:11:53.037 #31 NEW cov: 12704 ft: 15770 corp: 27/263b lim: 20 exec/s: 31 rss: 75Mb L: 16/19 MS: 1 CrossOver- 00:11:53.037 [2024-10-14 17:28:49.939667] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.037 [2024-10-14 17:28:49.939694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.037 #32 NEW cov: 12704 ft: 15797 corp: 28/273b lim: 20 exec/s: 32 rss: 75Mb L: 10/19 MS: 1 CMP- DE: "\000\006"- 00:11:53.037 [2024-10-14 17:28:49.979774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.037 [2024-10-14 17:28:49.979801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.037 #33 NEW cov: 12704 ft: 15800 corp: 29/284b lim: 20 exec/s: 33 rss: 75Mb L: 11/19 MS: 1 InsertByte- 00:11:53.037 [2024-10-14 17:28:50.040197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.037 [2024-10-14 17:28:50.040225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.037 #34 NEW cov: 12704 ft: 15819 corp: 30/303b lim: 20 exec/s: 34 rss: 75Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:11:53.037 [2024-10-14 17:28:50.100155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.037 [2024-10-14 17:28:50.100185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.037 #35 NEW cov: 12704 ft: 15825 corp: 31/313b lim: 20 exec/s: 35 rss: 75Mb L: 10/19 MS: 1 ChangeByte- 00:11:53.296 #36 NEW cov: 12704 ft: 15834 corp: 32/321b lim: 20 exec/s: 36 rss: 76Mb L: 8/19 MS: 1 ChangeByte- 00:11:53.296 [2024-10-14 17:28:50.180437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.296 [2024-10-14 17:28:50.180484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.296 #37 NEW cov: 12704 ft: 15870 corp: 33/331b lim: 20 exec/s: 37 rss: 76Mb L: 10/19 MS: 1 InsertByte- 00:11:53.296 [2024-10-14 17:28:50.220671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.296 [2024-10-14 17:28:50.220698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.296 #38 NEW cov: 12704 ft: 15884 corp: 34/350b lim: 20 exec/s: 19 rss: 76Mb L: 19/19 MS: 1 ChangeBinInt- 00:11:53.296 #38 DONE cov: 12704 ft: 15884 corp: 34/350b lim: 20 exec/s: 19 rss: 76Mb 00:11:53.296 ###### Recommended dictionary. ###### 00:11:53.296 "\000\006" # Uses: 0 00:11:53.296 ###### End of recommended dictionary. 
###### 00:11:53.296 Done 38 runs in 2 second(s) 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:11:53.296 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:53.555 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:53.555 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:53.555 17:28:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:11:53.555 [2024-10-14 17:28:50.421876] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:53.555 [2024-10-14 17:28:50.421947] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104837 ] 00:11:53.555 [2024-10-14 17:28:50.623685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.814 [2024-10-14 17:28:50.663186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.814 [2024-10-14 17:28:50.722080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.814 [2024-10-14 17:28:50.738234] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:11:53.814 INFO: Running with entropic power schedule (0xFF, 100). 
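The "Recommended dictionary" blocks printed at the end of the two runs above list the byte strings (shown with C-style octal escapes) that libFuzzer found worth keeping, with a usage count for each: "\001\000\177JL\016\317m" was used 4 times and "\000\006" not at all. As an illustration rather than a command from this log, the escapes can be decoded in any shell to see the raw bytes behind the "PersAutoDict" pattern referenced in the mutation summaries (MS:) above:

    # example only: decode the octal-escaped dictionary entry reported above
    printf '\001\000\177JL\016\317m' | xxd
    # 00000000: 0100 7f4a 4c0e cf6d                      ...JL..m

Entries like these can be fed back into later runs through libFuzzer's -dict= option (one quoted, hex-escaped value per line), though whether this wrapper forwards that flag is not something the log shows.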
00:11:53.814 INFO: Seed: 3167139931 00:11:53.814 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:53.814 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:53.814 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:11:53.814 INFO: A corpus is not provided, starting from an empty corpus 00:11:53.814 #2 INITED exec/s: 0 rss: 66Mb 00:11:53.814 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:53.814 This may also happen if the target rejected all inputs we tried so far 00:11:53.814 [2024-10-14 17:28:50.798200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.814 [2024-10-14 17:28:50.798230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:53.814 [2024-10-14 17:28:50.798288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.814 [2024-10-14 17:28:50.798303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:53.814 [2024-10-14 17:28:50.798358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.814 [2024-10-14 17:28:50.798372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:53.814 [2024-10-14 17:28:50.798426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:53.814 [2024-10-14 17:28:50.798440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.073 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:11:54.073 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:54.073 #3 NEW cov: 12224 ft: 12225 corp: 2/32b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:11:54.073 [2024-10-14 17:28:51.139186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.073 [2024-10-14 17:28:51.139243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.073 [2024-10-14 17:28:51.139325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.073 [2024-10-14 17:28:51.139353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.073 [2024-10-14 17:28:51.139429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.073 [2024-10-14 
17:28:51.139454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.073 [2024-10-14 17:28:51.139533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.073 [2024-10-14 17:28:51.139559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.332 #4 NEW cov: 12337 ft: 12914 corp: 3/63b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 ChangeBinInt- 00:11:54.332 [2024-10-14 17:28:51.208546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.208574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.332 #6 NEW cov: 12343 ft: 13952 corp: 4/72b lim: 35 exec/s: 0 rss: 74Mb L: 9/31 MS: 2 CrossOver-CMP- DE: "\015\000\000\000\000\000\000\000"- 00:11:54.332 [2024-10-14 17:28:51.249130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.249156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.249209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:dbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.249224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.249274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.249289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.249340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.249354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.332 #7 NEW cov: 12428 ft: 14297 corp: 5/103b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 ChangeBit- 00:11:54.332 [2024-10-14 17:28:51.289197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.289224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.289283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.289297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.289349] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.289364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.289416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.289431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.332 #8 NEW cov: 12428 ft: 14382 corp: 6/134b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 ShuffleBytes- 00:11:54.332 [2024-10-14 17:28:51.348903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:20000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.348929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.332 #9 NEW cov: 12428 ft: 14489 corp: 7/143b lim: 35 exec/s: 0 rss: 74Mb L: 9/31 MS: 1 ChangeBit- 00:11:54.332 [2024-10-14 17:28:51.409690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.409717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.409770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.409785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.409837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.409851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.409900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3c3ccbcb cdw11:3c3c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.409914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.332 [2024-10-14 17:28:51.409968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.332 [2024-10-14 17:28:51.409982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:54.591 #10 NEW cov: 12428 ft: 14621 corp: 8/178b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:54.591 [2024-10-14 17:28:51.449641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:20000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.591 [2024-10-14 17:28:51.449667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.591 [2024-10-14 17:28:51.449720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.591 [2024-10-14 17:28:51.449734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.591 [2024-10-14 17:28:51.449792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.591 [2024-10-14 17:28:51.449807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.591 [2024-10-14 17:28:51.449859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.591 [2024-10-14 17:28:51.449873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.591 #11 NEW cov: 12428 ft: 14669 corp: 9/211b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:11:54.591 [2024-10-14 17:28:51.509946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.591 [2024-10-14 17:28:51.509973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.591 [2024-10-14 17:28:51.510033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.591 [2024-10-14 17:28:51.510048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.591 [2024-10-14 17:28:51.510103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.510118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.592 [2024-10-14 17:28:51.510171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3c3ccbcb cdw11:3c3c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.510185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.592 [2024-10-14 17:28:51.510236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.510250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:54.592 #12 NEW cov: 12428 ft: 14715 corp: 10/246b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:11:54.592 [2024-10-14 17:28:51.569492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.569519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.592 #17 NEW cov: 12428 ft: 14778 corp: 11/257b lim: 35 exec/s: 0 rss: 74Mb L: 11/35 MS: 5 CopyPart-InsertByte-InsertByte-CrossOver-PersAutoDict- DE: "\015\000\000\000\000\000\000\000"- 00:11:54.592 [2024-10-14 17:28:51.610224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.610249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.592 [2024-10-14 17:28:51.610302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.610317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.592 [2024-10-14 17:28:51.610371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.610388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.592 [2024-10-14 17:28:51.610442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3c3ccbcb cdw11:3c3c0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.610456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.592 [2024-10-14 17:28:51.610510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:08cbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.610524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:54.592 #18 NEW cov: 12428 ft: 14807 corp: 12/292b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:11:54.592 [2024-10-14 17:28:51.669779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00200000 cdw11:0d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.592 [2024-10-14 17:28:51.669805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.851 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:54.851 #19 NEW cov: 12451 ft: 14842 corp: 13/301b lim: 35 exec/s: 0 rss: 74Mb L: 9/35 MS: 1 ShuffleBytes- 00:11:54.851 [2024-10-14 17:28:51.719953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:20000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.851 [2024-10-14 17:28:51.719979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.851 #20 NEW cov: 12451 ft: 14870 corp: 14/310b lim: 35 exec/s: 0 rss: 74Mb L: 9/35 MS: 1 ChangeBit- 00:11:54.851 [2024-10-14 17:28:51.760653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 
17:28:51.760680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.760732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbffcbcb cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.760746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.760798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.760812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.760866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.760880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.760933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.760947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:54.852 #21 NEW cov: 12451 ft: 14890 corp: 15/345b lim: 35 exec/s: 21 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:54.852 [2024-10-14 17:28:51.800609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.800634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.800691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.800706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.800759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.800774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.800828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbc3cb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.800842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.852 #22 NEW cov: 12451 ft: 14914 corp: 16/376b lim: 35 exec/s: 22 rss: 74Mb L: 31/35 MS: 1 ChangeBit- 00:11:54.852 [2024-10-14 17:28:51.840256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 
[2024-10-14 17:28:51.840282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.852 #23 NEW cov: 12451 ft: 14962 corp: 17/385b lim: 35 exec/s: 23 rss: 74Mb L: 9/35 MS: 1 ChangeBinInt- 00:11:54.852 [2024-10-14 17:28:51.880834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.880861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.880913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.880928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.880982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:35cbcbcb cdw11:c3340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.880996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:54.852 [2024-10-14 17:28:51.881054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcb34 cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.881068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:54.852 #24 NEW cov: 12451 ft: 14977 corp: 18/416b lim: 35 exec/s: 24 rss: 75Mb L: 31/35 MS: 1 ShuffleBytes- 00:11:54.852 [2024-10-14 17:28:51.940588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00200000 cdw11:0d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:54.852 [2024-10-14 17:28:51.940615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.111 #25 NEW cov: 12451 ft: 14990 corp: 19/425b lim: 35 exec/s: 25 rss: 75Mb L: 9/35 MS: 1 CopyPart- 00:11:55.111 [2024-10-14 17:28:52.001154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.001180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.111 [2024-10-14 17:28:52.001234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.001249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.111 [2024-10-14 17:28:52.001305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:35cbcbcb cdw11:c3340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.001320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.111 [2024-10-14 17:28:52.001373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcb34 cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.001386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.111 #26 NEW cov: 12451 ft: 15054 corp: 20/456b lim: 35 exec/s: 26 rss: 75Mb L: 31/35 MS: 1 ChangeByte- 00:11:55.111 [2024-10-14 17:28:52.061178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.061203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.111 [2024-10-14 17:28:52.061259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.061273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.111 [2024-10-14 17:28:52.061328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.111 [2024-10-14 17:28:52.061342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.111 #29 NEW cov: 12451 ft: 15289 corp: 21/479b lim: 35 exec/s: 29 rss: 75Mb L: 23/35 MS: 3 EraseBytes-ChangeByte-InsertRepeatedBytes- 00:11:55.111 [2024-10-14 17:28:52.121650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.121677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.121730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.121744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.121798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3534cbcb cdw11:34340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.121812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.121865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.121879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.121932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:0affcbcb cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.121946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:55.112 #30 NEW cov: 12451 ft: 15309 corp: 22/514b lim: 35 exec/s: 
30 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:55.112 [2024-10-14 17:28:52.161602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.161631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.161688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.161702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.161754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:35cbcbcb cdw11:c3340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.161770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.112 [2024-10-14 17:28:52.161822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcb34 cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.161836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.112 #31 NEW cov: 12451 ft: 15320 corp: 23/545b lim: 35 exec/s: 31 rss: 75Mb L: 31/35 MS: 1 ShuffleBytes- 00:11:55.112 [2024-10-14 17:28:52.201286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000d00 cdw11:00200000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.112 [2024-10-14 17:28:52.201313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.371 #32 NEW cov: 12451 ft: 15329 corp: 24/554b lim: 35 exec/s: 32 rss: 75Mb L: 9/35 MS: 1 ShuffleBytes- 00:11:55.371 [2024-10-14 17:28:52.241515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:c3340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.241541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.371 [2024-10-14 17:28:52.241592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcb34 cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.241606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.371 #33 NEW cov: 12451 ft: 15617 corp: 25/571b lim: 35 exec/s: 33 rss: 75Mb L: 17/35 MS: 1 EraseBytes- 00:11:55.371 [2024-10-14 17:28:52.301546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:24000d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.301572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.371 #34 NEW cov: 12451 ft: 15642 corp: 26/581b lim: 35 exec/s: 34 rss: 75Mb L: 10/35 MS: 1 InsertByte- 00:11:55.371 [2024-10-14 17:28:52.362021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.362051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.371 [2024-10-14 17:28:52.362104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffd4ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.362119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.371 [2024-10-14 17:28:52.362170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.362185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.371 #35 NEW cov: 12451 ft: 15655 corp: 27/604b lim: 35 exec/s: 35 rss: 75Mb L: 23/35 MS: 1 ChangeByte- 00:11:55.371 [2024-10-14 17:28:52.422357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.422382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.371 [2024-10-14 17:28:52.422435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cb0dcbcb cdw11:00240000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.422449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.371 [2024-10-14 17:28:52.422499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:20000000 cdw11:000a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.422513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.371 [2024-10-14 17:28:52.422562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcb34 cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.371 [2024-10-14 17:28:52.422575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.630 #36 NEW cov: 12451 ft: 15754 corp: 28/635b lim: 35 exec/s: 36 rss: 75Mb L: 31/35 MS: 1 CrossOver- 00:11:55.630 [2024-10-14 17:28:52.482080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:20000a0d cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.482106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.630 #37 NEW cov: 12451 ft: 15770 corp: 29/644b lim: 35 exec/s: 37 rss: 75Mb L: 9/35 MS: 1 ChangeByte- 00:11:55.630 [2024-10-14 17:28:52.522195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:39000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.522220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:11:55.630 #38 NEW cov: 12451 ft: 15822 corp: 30/653b lim: 35 exec/s: 38 rss: 75Mb L: 9/35 MS: 1 ChangeByte- 00:11:55.630 [2024-10-14 17:28:52.562268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.562293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.622484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.622509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.630 #40 NEW cov: 12451 ft: 15830 corp: 31/662b lim: 35 exec/s: 40 rss: 75Mb L: 9/35 MS: 2 ChangeBit-PersAutoDict- DE: "\015\000\000\000\000\000\000\000"- 00:11:55.630 [2024-10-14 17:28:52.663023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.663052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.663105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.663119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.663172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:35cbcbcb cdw11:cb340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.663190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.663242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.663256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.630 #41 NEW cov: 12451 ft: 15842 corp: 32/690b lim: 35 exec/s: 41 rss: 75Mb L: 28/35 MS: 1 EraseBytes- 00:11:55.630 [2024-10-14 17:28:52.703120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.703145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.703196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.703211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.703261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:35cbcbcb cdw11:c3340000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.703275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.630 [2024-10-14 17:28:52.703327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcb34 cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.630 [2024-10-14 17:28:52.703341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.890 #42 NEW cov: 12451 ft: 15857 corp: 33/722b lim: 35 exec/s: 42 rss: 75Mb L: 32/35 MS: 1 InsertByte- 00:11:55.890 [2024-10-14 17:28:52.743430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.890 [2024-10-14 17:28:52.743455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.890 [2024-10-14 17:28:52.743507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.890 [2024-10-14 17:28:52.743522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.890 [2024-10-14 17:28:52.743570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:35cbcbcb cdw11:c3340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.890 [2024-10-14 17:28:52.743584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.891 [2024-10-14 17:28:52.743634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff34ffff cdw11:cb340003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.891 [2024-10-14 17:28:52.743648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.891 [2024-10-14 17:28:52.743699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.891 [2024-10-14 17:28:52.743713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:55.891 #43 NEW cov: 12451 ft: 15862 corp: 34/757b lim: 35 exec/s: 43 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:11:55.891 [2024-10-14 17:28:52.783367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:cbcbcfcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.891 [2024-10-14 17:28:52.783395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:55.891 [2024-10-14 17:28:52.783447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:cbcbcbcb cdw11:dbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.891 [2024-10-14 17:28:52.783461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:55.891 [2024-10-14 17:28:52.783512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.891 [2024-10-14 17:28:52.783527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:55.891 [2024-10-14 17:28:52.783579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:cbcbcbcb cdw11:cbcb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:55.891 [2024-10-14 17:28:52.783592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:55.891 #44 NEW cov: 12451 ft: 15895 corp: 35/788b lim: 35 exec/s: 22 rss: 75Mb L: 31/35 MS: 1 ChangeBit- 00:11:55.891 #44 DONE cov: 12451 ft: 15895 corp: 35/788b lim: 35 exec/s: 22 rss: 75Mb 00:11:55.891 ###### Recommended dictionary. ###### 00:11:55.891 "\015\000\000\000\000\000\000\000" # Uses: 2 00:11:55.891 ###### End of recommended dictionary. ###### 00:11:55.891 Done 44 runs in 2 second(s) 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:55.891 17:28:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:11:55.891 [2024-10-14 
17:28:52.976264] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:55.891 [2024-10-14 17:28:52.976336] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105126 ] 00:11:56.150 [2024-10-14 17:28:53.171931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.150 [2024-10-14 17:28:53.210410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.409 [2024-10-14 17:28:53.269848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.409 [2024-10-14 17:28:53.285992] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:11:56.409 INFO: Running with entropic power schedule (0xFF, 100). 00:11:56.409 INFO: Seed: 1421160170 00:11:56.409 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:56.409 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:56.409 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:11:56.409 INFO: A corpus is not provided, starting from an empty corpus 00:11:56.409 #2 INITED exec/s: 0 rss: 66Mb 00:11:56.409 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:11:56.409 This may also happen if the target rejected all inputs we tried so far 00:11:56.409 [2024-10-14 17:28:53.354845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.409 [2024-10-14 17:28:53.354884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.409 [2024-10-14 17:28:53.354971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.409 [2024-10-14 17:28:53.354987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.409 [2024-10-14 17:28:53.355077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.409 [2024-10-14 17:28:53.355093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.409 [2024-10-14 17:28:53.355178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.409 [2024-10-14 17:28:53.355193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:56.668 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:11:56.668 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:56.668 #5 NEW cov: 12235 ft: 12235 corp: 2/39b lim: 45 exec/s: 0 rss: 74Mb L: 38/38 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:11:56.668 [2024-10-14 
17:28:53.695362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.668 [2024-10-14 17:28:53.695405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.668 [2024-10-14 17:28:53.695497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.668 [2024-10-14 17:28:53.695513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.668 [2024-10-14 17:28:53.695602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.668 [2024-10-14 17:28:53.695620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.668 #6 NEW cov: 12348 ft: 13214 corp: 3/70b lim: 45 exec/s: 0 rss: 74Mb L: 31/38 MS: 1 EraseBytes- 00:11:56.927 [2024-10-14 17:28:53.765622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.765650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.765746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.765761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.765852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.765867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.927 #12 NEW cov: 12354 ft: 13412 corp: 4/101b lim: 45 exec/s: 0 rss: 74Mb L: 31/38 MS: 1 ChangeBit- 00:11:56.927 [2024-10-14 17:28:53.836279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.836307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.836400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.836415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.836510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.836526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 
17:28:53.836617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c52a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.836632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:56.927 #13 NEW cov: 12439 ft: 13703 corp: 5/140b lim: 45 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 InsertByte- 00:11:56.927 [2024-10-14 17:28:53.886002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.886032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.886125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.886141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.886228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.886244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.927 #14 NEW cov: 12439 ft: 13788 corp: 6/171b lim: 45 exec/s: 0 rss: 74Mb L: 31/39 MS: 1 ShuffleBytes- 00:11:56.927 [2024-10-14 17:28:53.956263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.956289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.956378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.927 [2024-10-14 17:28:53.956393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:56.927 [2024-10-14 17:28:53.956495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:56.928 [2024-10-14 17:28:53.956511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:56.928 #15 NEW cov: 12439 ft: 13907 corp: 7/202b lim: 45 exec/s: 0 rss: 74Mb L: 31/39 MS: 1 ShuffleBytes- 00:11:57.187 [2024-10-14 17:28:54.027000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.027031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.027121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.027136] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.027221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.027235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.027326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.027340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.187 #21 NEW cov: 12439 ft: 13971 corp: 8/238b lim: 45 exec/s: 0 rss: 74Mb L: 36/39 MS: 1 CrossOver- 00:11:57.187 [2024-10-14 17:28:54.076022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.076053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.187 #22 NEW cov: 12439 ft: 14727 corp: 9/252b lim: 45 exec/s: 0 rss: 74Mb L: 14/39 MS: 1 InsertRepeatedBytes- 00:11:57.187 [2024-10-14 17:28:54.137152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.137178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.137267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.137284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.137378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.137393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.187 #23 NEW cov: 12439 ft: 14779 corp: 10/283b lim: 45 exec/s: 0 rss: 75Mb L: 31/39 MS: 1 ChangeByte- 00:11:57.187 [2024-10-14 17:28:54.207779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.207807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.207904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.207920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.208010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 
cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.208030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.208124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c585c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.208139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.187 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:57.187 #24 NEW cov: 12462 ft: 14861 corp: 11/324b lim: 45 exec/s: 0 rss: 75Mb L: 41/41 MS: 1 CopyPart- 00:11:57.187 [2024-10-14 17:28:54.267990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5850006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.268018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.268121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.268137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.268234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.268250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.187 [2024-10-14 17:28:54.268337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c585 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.187 [2024-10-14 17:28:54.268353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.447 #25 NEW cov: 12462 ft: 14886 corp: 12/365b lim: 45 exec/s: 0 rss: 75Mb L: 41/41 MS: 1 CopyPart- 00:11:57.447 [2024-10-14 17:28:54.337623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.337652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.447 [2024-10-14 17:28:54.337745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.337761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.447 #26 NEW cov: 12462 ft: 15115 corp: 13/388b lim: 45 exec/s: 26 rss: 75Mb L: 23/41 MS: 1 EraseBytes- 00:11:57.447 [2024-10-14 17:28:54.388019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.388056] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.447 [2024-10-14 17:28:54.388144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.388167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.447 [2024-10-14 17:28:54.388265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.388291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.447 #27 NEW cov: 12462 ft: 15142 corp: 14/419b lim: 45 exec/s: 27 rss: 75Mb L: 31/41 MS: 1 ChangeBit- 00:11:57.447 [2024-10-14 17:28:54.457846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:6d6d3a6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.457875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.447 #28 NEW cov: 12462 ft: 15155 corp: 15/433b lim: 45 exec/s: 28 rss: 75Mb L: 14/41 MS: 1 ChangeByte- 00:11:57.447 [2024-10-14 17:28:54.528534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.447 [2024-10-14 17:28:54.528564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.707 #33 NEW cov: 12462 ft: 15231 corp: 16/442b lim: 45 exec/s: 33 rss: 75Mb L: 9/41 MS: 5 CrossOver-EraseBytes-EraseBytes-ChangeByte-CrossOver- 00:11:57.707 [2024-10-14 17:28:54.579658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.579685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.579797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.579816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.579913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:85c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.579928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.707 #34 NEW cov: 12462 ft: 15251 corp: 17/474b lim: 45 exec/s: 34 rss: 75Mb L: 32/41 MS: 1 InsertByte- 00:11:57.707 [2024-10-14 17:28:54.649247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:6d6d3a6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.649276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:11:57.707 #35 NEW cov: 12462 ft: 15263 corp: 18/488b lim: 45 exec/s: 35 rss: 75Mb L: 14/41 MS: 1 ShuffleBytes- 00:11:57.707 [2024-10-14 17:28:54.720545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.720573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.720661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.720677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.720785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.720801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.720890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c52a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.720904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.707 #36 NEW cov: 12462 ft: 15287 corp: 19/528b lim: 45 exec/s: 36 rss: 75Mb L: 40/41 MS: 1 CopyPart- 00:11:57.707 [2024-10-14 17:28:54.790612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.790638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.790731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.790747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.707 [2024-10-14 17:28:54.790850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c585c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.707 [2024-10-14 17:28:54.790865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.966 #37 NEW cov: 12462 ft: 15294 corp: 20/560b lim: 45 exec/s: 37 rss: 75Mb L: 32/41 MS: 1 InsertByte- 00:11:57.966 [2024-10-14 17:28:54.840224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c537 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:54.840251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.966 #38 NEW cov: 12462 ft: 15303 corp: 21/569b lim: 45 exec/s: 38 rss: 75Mb L: 9/41 MS: 1 ChangeBinInt- 00:11:57.966 [2024-10-14 17:28:54.910702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 
cdw10:3a6d3a6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:54.910729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.966 #44 NEW cov: 12462 ft: 15315 corp: 22/583b lim: 45 exec/s: 44 rss: 75Mb L: 14/41 MS: 1 CopyPart- 00:11:57.966 [2024-10-14 17:28:54.982211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:54.982238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.966 [2024-10-14 17:28:54.982328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:54.982342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.966 [2024-10-14 17:28:54.982448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:54.982464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:57.966 [2024-10-14 17:28:54.982554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c52a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:54.982570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:57.966 #45 NEW cov: 12462 ft: 15320 corp: 23/623b lim: 45 exec/s: 45 rss: 75Mb L: 40/41 MS: 1 ChangeBinInt- 00:11:57.966 [2024-10-14 17:28:55.052236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:55.052267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:57.966 [2024-10-14 17:28:55.052361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5850006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:55.052377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:57.966 [2024-10-14 17:28:55.052465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:57.966 [2024-10-14 17:28:55.052481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.226 #51 NEW cov: 12462 ft: 15330 corp: 24/654b lim: 45 exec/s: 51 rss: 75Mb L: 31/41 MS: 1 CopyPart- 00:11:58.226 [2024-10-14 17:28:55.103153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.103181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 
17:28:55.103265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.103280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.103370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.103386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.103477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c52a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.103491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.226 #52 NEW cov: 12462 ft: 15347 corp: 25/693b lim: 45 exec/s: 52 rss: 75Mb L: 39/41 MS: 1 CMP- DE: "\377\377\377\377\000\000\000\000"- 00:11:58.226 [2024-10-14 17:28:55.153242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c5c5c5bc cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.153269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.153359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.153375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.153459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.153474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.153559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c5c5c5c5 cdw11:c5c50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.153574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:58.226 #53 NEW cov: 12462 ft: 15401 corp: 26/731b lim: 45 exec/s: 53 rss: 75Mb L: 38/41 MS: 1 ChangeBinInt- 00:11:58.226 [2024-10-14 17:28:55.202606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.202632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.226 #54 NEW cov: 12462 ft: 15435 corp: 27/740b lim: 45 exec/s: 54 rss: 75Mb L: 9/41 MS: 1 EraseBytes- 00:11:58.226 [2024-10-14 17:28:55.252642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:6d6d3a6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.252668] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.226 #55 NEW cov: 12462 ft: 15438 corp: 28/755b lim: 45 exec/s: 55 rss: 75Mb L: 15/41 MS: 1 CopyPart- 00:11:58.226 [2024-10-14 17:28:55.303617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:6d6d3a6d cdw11:6d6d0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.303645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.303735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.303750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:58.226 [2024-10-14 17:28:55.303834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:a7a7a7a7 cdw11:a7a70005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:11:58.226 [2024-10-14 17:28:55.303849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:58.485 #56 NEW cov: 12462 ft: 15442 corp: 29/790b lim: 45 exec/s: 28 rss: 75Mb L: 35/41 MS: 1 InsertRepeatedBytes- 00:11:58.485 #56 DONE cov: 12462 ft: 15442 corp: 29/790b lim: 45 exec/s: 28 rss: 75Mb 00:11:58.486 ###### Recommended dictionary. ###### 00:11:58.486 "\377\377\377\377\000\000\000\000" # Uses: 0 00:11:58.486 ###### End of recommended dictionary. ###### 00:11:58.486 Done 56 runs in 2 second(s) 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": 
"4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:11:58.486 17:28:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:11:58.486 [2024-10-14 17:28:55.486510] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:11:58.486 [2024-10-14 17:28:55.486580] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105444 ] 00:11:58.745 [2024-10-14 17:28:55.681985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.745 [2024-10-14 17:28:55.721909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.745 [2024-10-14 17:28:55.781185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.745 [2024-10-14 17:28:55.797334] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:11:58.745 INFO: Running with entropic power schedule (0xFF, 100). 00:11:58.745 INFO: Seed: 3933167571 00:11:59.004 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:11:59.005 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:11:59.005 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:11:59.005 INFO: A corpus is not provided, starting from an empty corpus 00:11:59.005 #2 INITED exec/s: 0 rss: 66Mb 00:11:59.005 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:11:59.005 This may also happen if the target rejected all inputs we tried so far 00:11:59.005 [2024-10-14 17:28:55.862783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:11:59.005 [2024-10-14 17:28:55.862816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.264 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:11:59.264 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:11:59.264 #4 NEW cov: 12134 ft: 12134 corp: 2/3b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 2 ChangeBit-CopyPart- 00:11:59.264 [2024-10-14 17:28:56.195324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000370e cdw11:00000000 00:11:59.264 [2024-10-14 17:28:56.195372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.264 #5 NEW cov: 12264 ft: 12610 corp: 3/6b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 InsertByte- 00:11:59.264 [2024-10-14 17:28:56.265481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:11:59.264 [2024-10-14 17:28:56.265507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.264 #6 NEW cov: 12270 ft: 12978 corp: 4/9b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 InsertByte- 00:11:59.264 [2024-10-14 17:28:56.315696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eeb cdw11:00000000 00:11:59.264 [2024-10-14 17:28:56.315723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.264 #7 NEW cov: 12355 ft: 13229 corp: 5/11b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 ChangeByte- 00:11:59.523 [2024-10-14 17:28:56.365890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000370e cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.365917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.523 #8 NEW cov: 12355 ft: 13270 corp: 6/14b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 ShuffleBytes- 00:11:59.523 [2024-10-14 17:28:56.436463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.436490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.523 #9 NEW cov: 12355 ft: 13353 corp: 7/16b lim: 10 exec/s: 0 rss: 74Mb L: 2/3 MS: 1 ShuffleBytes- 00:11:59.523 [2024-10-14 17:28:56.487635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a48 cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.487662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.523 [2024-10-14 17:28:56.487743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.487759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.523 [2024-10-14 17:28:56.487864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.487880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.523 [2024-10-14 17:28:56.487966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.487979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:59.523 #10 NEW cov: 12355 ft: 13717 corp: 8/25b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "H\000\000\000\000\000\000\000"- 00:11:59.523 [2024-10-14 17:28:56.547124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000360e cdw11:00000000 00:11:59.523 [2024-10-14 17:28:56.547153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.524 #11 NEW cov: 12355 ft: 13765 corp: 9/28b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 ChangeASCIIInt- 00:11:59.782 [2024-10-14 17:28:56.617634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000360e cdw11:00000000 00:11:59.782 [2024-10-14 17:28:56.617661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.782 #12 NEW cov: 12355 ft: 13872 corp: 10/31b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 CrossOver- 00:11:59.782 [2024-10-14 17:28:56.687836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003744 cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.687861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.783 #13 NEW cov: 12355 ft: 13908 corp: 11/34b lim: 10 exec/s: 0 rss: 74Mb L: 3/9 MS: 1 ChangeByte- 00:11:59.783 [2024-10-14 17:28:56.738169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000f0e cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.738195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.783 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:11:59.783 #14 NEW cov: 12378 ft: 13947 corp: 12/36b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 ChangeBit- 00:11:59.783 [2024-10-14 17:28:56.788476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.788502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.783 #15 NEW cov: 12378 ft: 13951 corp: 13/38b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 InsertByte- 00:11:59.783 [2024-10-14 17:28:56.839931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.839961] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:11:59.783 [2024-10-14 17:28:56.840048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.840063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:11:59.783 [2024-10-14 17:28:56.840146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.840160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:11:59.783 [2024-10-14 17:28:56.840249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.840265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:11:59.783 [2024-10-14 17:28:56.840344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:11:59.783 [2024-10-14 17:28:56.840359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:11:59.783 #16 NEW cov: 12378 ft: 14024 corp: 14/48b lim: 10 exec/s: 16 rss: 74Mb L: 10/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:12:00.041 [2024-10-14 17:28:56.890490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.041 [2024-10-14 17:28:56.890518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.041 [2024-10-14 17:28:56.890604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.041 [2024-10-14 17:28:56.890621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.041 [2024-10-14 17:28:56.890704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.041 [2024-10-14 17:28:56.890719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.041 [2024-10-14 17:28:56.890794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.041 [2024-10-14 17:28:56.890810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:56.890896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:12:00.042 [2024-10-14 17:28:56.890912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:00.042 #19 NEW cov: 12378 ft: 14060 corp: 15/58b lim: 10 exec/s: 19 rss: 74Mb L: 10/10 MS: 3 EraseBytes-CrossOver-InsertRepeatedBytes- 00:12:00.042 [2024-10-14 17:28:56.959589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:00003644 cdw11:00000000 00:12:00.042 [2024-10-14 17:28:56.959617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.042 #20 NEW cov: 12378 ft: 14101 corp: 16/61b lim: 10 exec/s: 20 rss: 74Mb L: 3/10 MS: 1 ChangeBit- 00:12:00.042 [2024-10-14 17:28:57.030921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aa9 cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.030952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.031035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.031064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.031163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.031177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.031281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.031296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.042 #21 NEW cov: 12378 ft: 14151 corp: 17/69b lim: 10 exec/s: 21 rss: 74Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:12:00.042 [2024-10-14 17:28:57.081420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.081449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.081551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.081567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.081656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.081670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.081756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.081770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.042 [2024-10-14 17:28:57.081857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:12:00.042 [2024-10-14 17:28:57.081871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:00.042 #22 NEW cov: 12378 ft: 14193 corp: 18/79b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:12:00.302 
[2024-10-14 17:28:57.150883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e26 cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.150910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.302 #23 NEW cov: 12378 ft: 14266 corp: 19/82b lim: 10 exec/s: 23 rss: 74Mb L: 3/10 MS: 1 InsertByte- 00:12:00.302 [2024-10-14 17:28:57.221279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.221304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.302 #24 NEW cov: 12378 ft: 14284 corp: 20/84b lim: 10 exec/s: 24 rss: 75Mb L: 2/10 MS: 1 CopyPart- 00:12:00.302 [2024-10-14 17:28:57.292570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004c0a cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.292596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.302 [2024-10-14 17:28:57.292677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.292692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.302 [2024-10-14 17:28:57.292799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.292817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.302 [2024-10-14 17:28:57.292905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.292921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.302 #27 NEW cov: 12378 ft: 14291 corp: 21/92b lim: 10 exec/s: 27 rss: 75Mb L: 8/10 MS: 3 ChangeBit-ChangeByte-CrossOver- 00:12:00.302 [2024-10-14 17:28:57.342002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.302 [2024-10-14 17:28:57.342032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.302 #28 NEW cov: 12378 ft: 14320 corp: 22/94b lim: 10 exec/s: 28 rss: 75Mb L: 2/10 MS: 1 EraseBytes- 00:12:00.561 [2024-10-14 17:28:57.413107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.413133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.413217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006363 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.413233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.413321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
CQ (04) qid:0 cid:6 nsid:0 cdw10:00006363 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.413336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.413424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00006363 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.413438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.561 #29 NEW cov: 12378 ft: 14335 corp: 23/103b lim: 10 exec/s: 29 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:12:00.561 [2024-10-14 17:28:57.482477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.482503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.561 #30 NEW cov: 12378 ft: 14411 corp: 24/106b lim: 10 exec/s: 30 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:12:00.561 [2024-10-14 17:28:57.553529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.553555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.553638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000a5a5 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.553653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.553752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a5a5 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.553769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.561 #31 NEW cov: 12378 ft: 14578 corp: 25/113b lim: 10 exec/s: 31 rss: 75Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:12:00.561 [2024-10-14 17:28:57.623795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.623823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.623915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000025a5 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.623932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.561 [2024-10-14 17:28:57.624019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a5a5 cdw11:00000000 00:12:00.561 [2024-10-14 17:28:57.624039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.820 #32 NEW cov: 12378 ft: 14594 corp: 26/120b lim: 10 exec/s: 32 rss: 75Mb L: 7/10 MS: 1 ChangeBit- 00:12:00.820 [2024-10-14 17:28:57.693984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003636 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.694010] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.694103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.694118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.694209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000e0e cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.694223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.820 #33 NEW cov: 12378 ft: 14610 corp: 27/126b lim: 10 exec/s: 33 rss: 75Mb L: 6/10 MS: 1 CrossOver- 00:12:00.820 [2024-10-14 17:28:57.764735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004800 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.764761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.764845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.764860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.764945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.764959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.765043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.765058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:00.820 #37 NEW cov: 12378 ft: 14618 corp: 28/135b lim: 10 exec/s: 37 rss: 75Mb L: 9/10 MS: 4 ChangeByte-ChangeBinInt-ChangeBinInt-PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:12:00.820 [2024-10-14 17:28:57.814949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003601 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.814974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.815065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.815080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.815180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.815195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.820 #38 NEW cov: 12378 ft: 14631 corp: 29/142b lim: 10 exec/s: 38 rss: 75Mb L: 7/10 MS: 1 CMP- DE: "\001\000\000\000"- 
00:12:00.820 [2024-10-14 17:28:57.865515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.865540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.865627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006363 cdw11:00000000 00:12:00.820 [2024-10-14 17:28:57.865641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:00.820 [2024-10-14 17:28:57.865727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00006363 cdw11:00000000 00:12:00.821 [2024-10-14 17:28:57.865742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:00.821 #39 NEW cov: 12378 ft: 14637 corp: 30/148b lim: 10 exec/s: 19 rss: 75Mb L: 6/10 MS: 1 EraseBytes- 00:12:00.821 #39 DONE cov: 12378 ft: 14637 corp: 30/148b lim: 10 exec/s: 19 rss: 75Mb 00:12:00.821 ###### Recommended dictionary. ###### 00:12:00.821 "H\000\000\000\000\000\000\000" # Uses: 1 00:12:00.821 "\377\377\377\377\377\377\377\377" # Uses: 0 00:12:00.821 "\001\000\000\000" # Uses: 0 00:12:00.821 ###### End of recommended dictionary. ###### 00:12:00.821 Done 39 runs in 2 second(s) 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:01.079 17:28:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:12:01.079 [2024-10-14 17:28:58.066931] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:01.079 [2024-10-14 17:28:58.067001] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105802 ] 00:12:01.338 [2024-10-14 17:28:58.255273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.338 [2024-10-14 17:28:58.295225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.338 [2024-10-14 17:28:58.354325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.338 [2024-10-14 17:28:58.370490] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:12:01.338 INFO: Running with entropic power schedule (0xFF, 100). 00:12:01.338 INFO: Seed: 2210187493 00:12:01.338 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:01.338 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:01.338 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:12:01.338 INFO: A corpus is not provided, starting from an empty corpus 00:12:01.338 #2 INITED exec/s: 0 rss: 66Mb 00:12:01.338 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:01.338 This may also happen if the target rejected all inputs we tried so far 00:12:01.338 [2024-10-14 17:28:58.426036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c10a cdw11:00000000 00:12:01.338 [2024-10-14 17:28:58.426070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.856 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:12:01.856 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:01.856 #9 NEW cov: 12152 ft: 12147 corp: 2/3b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 2 ChangeByte-CrossOver- 00:12:01.856 [2024-10-14 17:28:58.767463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.767523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.856 [2024-10-14 17:28:58.767603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.767630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.856 [2024-10-14 17:28:58.767706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.767732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:01.856 [2024-10-14 17:28:58.767807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000007c cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.767833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:01.856 #11 NEW cov: 12265 ft: 13218 corp: 3/11b lim: 10 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 ChangeByte-InsertRepeatedBytes- 00:12:01.856 [2024-10-14 17:28:58.816908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c172 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.816934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.856 #12 NEW cov: 12271 ft: 13491 corp: 4/14b lim: 10 exec/s: 0 rss: 74Mb L: 3/8 MS: 1 InsertByte- 00:12:01.856 [2024-10-14 17:28:58.877216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.877243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:01.856 [2024-10-14 17:28:58.877295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.877310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:01.856 #13 NEW cov: 12356 ft: 13927 corp: 5/19b lim: 10 exec/s: 0 rss: 74Mb L: 5/8 MS: 1 EraseBytes- 
00:12:01.856 [2024-10-14 17:28:58.937254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000830 cdw11:00000000 00:12:01.856 [2024-10-14 17:28:58.937279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.117 #15 NEW cov: 12356 ft: 14103 corp: 6/21b lim: 10 exec/s: 0 rss: 74Mb L: 2/8 MS: 2 ChangeBit-InsertByte- 00:12:02.117 [2024-10-14 17:28:58.977450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac2 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:58.977475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:58.977531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:58.977545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.117 #16 NEW cov: 12356 ft: 14171 corp: 7/26b lim: 10 exec/s: 0 rss: 74Mb L: 5/8 MS: 1 InsertRepeatedBytes- 00:12:02.117 [2024-10-14 17:28:59.017426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f0a cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.017452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.117 #17 NEW cov: 12356 ft: 14256 corp: 8/28b lim: 10 exec/s: 0 rss: 74Mb L: 2/8 MS: 1 InsertByte- 00:12:02.117 [2024-10-14 17:28:59.057951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c100 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.057978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.058038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.058053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.058104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.058118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.058171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.058185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.117 #18 NEW cov: 12356 ft: 14330 corp: 9/36b lim: 10 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:12:02.117 [2024-10-14 17:28:59.098143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.098168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.098223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
SQ (00) qid:0 cid:5 nsid:0 cdw10:00001b1b cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.098237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.098289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001b1b cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.098304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.098356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001b00 cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.098373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.117 [2024-10-14 17:28:59.098427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000007c cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.098441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:02.117 #19 NEW cov: 12356 ft: 14399 corp: 10/46b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:12:02.117 [2024-10-14 17:28:59.157789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c14a cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.157815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.117 #20 NEW cov: 12356 ft: 14491 corp: 11/48b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:12:02.117 [2024-10-14 17:28:59.197949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c34a cdw11:00000000 00:12:02.117 [2024-10-14 17:28:59.197974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.376 #21 NEW cov: 12356 ft: 14524 corp: 12/50b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ChangeBinInt- 00:12:02.376 [2024-10-14 17:28:59.258075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000030 cdw11:00000000 00:12:02.376 [2024-10-14 17:28:59.258101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.376 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:02.376 #22 NEW cov: 12379 ft: 14600 corp: 13/52b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:12:02.376 [2024-10-14 17:28:59.318777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000038 cdw11:00000000 00:12:02.376 [2024-10-14 17:28:59.318805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.376 [2024-10-14 17:28:59.318859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001b1b cdw11:00000000 00:12:02.376 [2024-10-14 17:28:59.318874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.376 [2024-10-14 17:28:59.318928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001b1b cdw11:00000000 00:12:02.376 [2024-10-14 17:28:59.318944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.376 [2024-10-14 17:28:59.318996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001b00 cdw11:00000000 00:12:02.376 [2024-10-14 17:28:59.319011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.377 [2024-10-14 17:28:59.319065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000007c cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.319079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:02.377 #23 NEW cov: 12379 ft: 14655 corp: 14/62b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:12:02.377 [2024-10-14 17:28:59.378440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000060d1 cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.378465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.377 #27 NEW cov: 12379 ft: 14689 corp: 15/64b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 4 ChangeBit-ChangeByte-ChangeBinInt-InsertByte- 00:12:02.377 [2024-10-14 17:28:59.418923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.418951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.377 [2024-10-14 17:28:59.419007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.419021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.377 [2024-10-14 17:28:59.419081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.419096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.377 [2024-10-14 17:28:59.419150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.419163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.377 #28 NEW cov: 12379 ft: 14743 corp: 16/73b lim: 10 exec/s: 28 rss: 74Mb L: 9/10 MS: 1 InsertByte- 00:12:02.377 [2024-10-14 17:28:59.458653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c300 cdw11:00000000 00:12:02.377 [2024-10-14 17:28:59.458679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.636 #29 NEW cov: 12379 ft: 14772 corp: 17/75b lim: 10 exec/s: 29 rss: 74Mb L: 2/10 MS: 1 CrossOver- 00:12:02.636 [2024-10-14 17:28:59.519188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c100 cdw11:00000000 
00:12:02.636 [2024-10-14 17:28:59.519215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.519271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.519285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.519337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.519352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.519406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000040 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.519420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.636 #30 NEW cov: 12379 ft: 14786 corp: 18/83b lim: 10 exec/s: 30 rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:12:02.636 [2024-10-14 17:28:59.579381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.579410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.579465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000060d1 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.579479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.579533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.579548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.579600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.579617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.636 #31 NEW cov: 12379 ft: 14808 corp: 19/92b lim: 10 exec/s: 31 rss: 74Mb L: 9/10 MS: 1 CrossOver- 00:12:02.636 [2024-10-14 17:28:59.639256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac2 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.639282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.639337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.639352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.636 #32 NEW cov: 12379 ft: 14864 corp: 20/97b lim: 10 exec/s: 32 rss: 74Mb L: 5/10 MS: 1 CopyPart- 00:12:02.636 [2024-10-14 17:28:59.699403] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac2 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.699431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.636 [2024-10-14 17:28:59.699484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000028c2 cdw11:00000000 00:12:02.636 [2024-10-14 17:28:59.699498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.636 #33 NEW cov: 12379 ft: 14869 corp: 21/102b lim: 10 exec/s: 33 rss: 74Mb L: 5/10 MS: 1 ChangeByte- 00:12:02.895 [2024-10-14 17:28:59.739430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.739457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.895 #37 NEW cov: 12379 ft: 14877 corp: 22/104b lim: 10 exec/s: 37 rss: 74Mb L: 2/10 MS: 4 ShuffleBytes-ChangeBit-CopyPart-CopyPart- 00:12:02.895 [2024-10-14 17:28:59.779902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.779929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.895 [2024-10-14 17:28:59.779981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000060d1 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.779996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.895 [2024-10-14 17:28:59.780055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.780073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:02.895 [2024-10-14 17:28:59.780126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000060d1 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.780139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:02.895 #38 NEW cov: 12379 ft: 14892 corp: 23/113b lim: 10 exec/s: 38 rss: 74Mb L: 9/10 MS: 1 CopyPart- 00:12:02.895 [2024-10-14 17:28:59.839724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000030 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.839751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.895 #39 NEW cov: 12379 ft: 14906 corp: 24/115b lim: 10 exec/s: 39 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:12:02.895 [2024-10-14 17:28:59.899856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000080a cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.899882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.895 #40 NEW cov: 12379 ft: 14986 corp: 25/117b lim: 10 exec/s: 40 rss: 74Mb L: 2/10 MS: 1 CrossOver- 00:12:02.895 [2024-10-14 
17:28:59.960203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.960229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:02.895 [2024-10-14 17:28:59.960282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c20a cdw11:00000000 00:12:02.895 [2024-10-14 17:28:59.960297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:02.895 #41 NEW cov: 12379 ft: 15034 corp: 26/122b lim: 10 exec/s: 41 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:12:03.154 [2024-10-14 17:29:00.000338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.000364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.154 [2024-10-14 17:29:00.000420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001b1b cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.000434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.154 #42 NEW cov: 12379 ft: 15038 corp: 27/127b lim: 10 exec/s: 42 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:12:03.154 [2024-10-14 17:29:00.040382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000720a cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.040417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.154 #43 NEW cov: 12379 ft: 15063 corp: 28/129b lim: 10 exec/s: 43 rss: 75Mb L: 2/10 MS: 1 EraseBytes- 00:12:03.154 [2024-10-14 17:29:00.100910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.100943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.154 [2024-10-14 17:29:00.100997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000060d1 cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.101012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.154 [2024-10-14 17:29:00.101068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.101083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:03.154 [2024-10-14 17:29:00.101135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000060d1 cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.101149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:03.154 #44 NEW cov: 12379 ft: 15100 corp: 29/138b lim: 10 exec/s: 44 rss: 75Mb L: 9/10 MS: 1 CopyPart- 00:12:03.154 [2024-10-14 17:29:00.160651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00004d0a cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.160680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.154 #45 NEW cov: 12379 ft: 15122 corp: 30/140b lim: 10 exec/s: 45 rss: 75Mb L: 2/10 MS: 1 ChangeByte- 00:12:03.154 [2024-10-14 17:29:00.220831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007260 cdw11:00000000 00:12:03.154 [2024-10-14 17:29:00.220858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.154 #46 NEW cov: 12379 ft: 15155 corp: 31/142b lim: 10 exec/s: 46 rss: 75Mb L: 2/10 MS: 1 CrossOver- 00:12:03.413 [2024-10-14 17:29:00.260944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000720a cdw11:00000000 00:12:03.413 [2024-10-14 17:29:00.260970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.413 #48 NEW cov: 12379 ft: 15203 corp: 32/145b lim: 10 exec/s: 48 rss: 75Mb L: 3/10 MS: 2 EraseBytes-CrossOver- 00:12:03.413 [2024-10-14 17:29:00.321378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c2c2 cdw11:00000000 00:12:03.413 [2024-10-14 17:29:00.321405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.413 [2024-10-14 17:29:00.321458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c2b1 cdw11:00000000 00:12:03.413 [2024-10-14 17:29:00.321473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:03.413 [2024-10-14 17:29:00.321526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ac2 cdw11:00000000 00:12:03.413 [2024-10-14 17:29:00.321540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:03.413 #49 NEW cov: 12379 ft: 15381 corp: 33/151b lim: 10 exec/s: 49 rss: 75Mb L: 6/10 MS: 1 InsertByte- 00:12:03.413 [2024-10-14 17:29:00.381272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004c30 cdw11:00000000 00:12:03.413 [2024-10-14 17:29:00.381298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.413 #53 NEW cov: 12379 ft: 15389 corp: 34/153b lim: 10 exec/s: 26 rss: 75Mb L: 2/10 MS: 4 EraseBytes-ShuffleBytes-ShuffleBytes-InsertByte- 00:12:03.413 #53 DONE cov: 12379 ft: 15389 corp: 34/153b lim: 10 exec/s: 26 rss: 75Mb 00:12:03.413 Done 53 runs in 2 second(s) 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- 
# local core=0x1 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:03.673 17:29:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:12:03.673 [2024-10-14 17:29:00.579085] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:03.673 [2024-10-14 17:29:00.579155] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106159 ] 00:12:03.932 [2024-10-14 17:29:00.777582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.932 [2024-10-14 17:29:00.817705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.932 [2024-10-14 17:29:00.877282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.932 [2024-10-14 17:29:00.893432] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:12:03.932 INFO: Running with entropic power schedule (0xFF, 100). 
00:12:03.932 INFO: Seed: 437244914 00:12:03.932 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:03.932 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:03.932 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:12:03.932 INFO: A corpus is not provided, starting from an empty corpus 00:12:03.932 [2024-10-14 17:29:00.952704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:03.932 [2024-10-14 17:29:00.952734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:03.932 #2 INITED cov: 12180 ft: 12167 corp: 1/1b exec/s: 0 rss: 73Mb 00:12:03.932 [2024-10-14 17:29:00.992692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:03.932 [2024-10-14 17:29:00.992719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.191 #3 NEW cov: 12293 ft: 12611 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 CrossOver- 00:12:04.191 [2024-10-14 17:29:01.053024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.053061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.191 [2024-10-14 17:29:01.053121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.053137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.191 #4 NEW cov: 12299 ft: 13402 corp: 3/4b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:12:04.191 [2024-10-14 17:29:01.112977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.113003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.191 #5 NEW cov: 12384 ft: 13725 corp: 4/5b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 CrossOver- 00:12:04.191 [2024-10-14 17:29:01.153262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.153288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.191 [2024-10-14 17:29:01.153346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.153364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.191 #6 NEW cov: 12384 ft: 13965 corp: 5/7b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:12:04.191 
[2024-10-14 17:29:01.213913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.213940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.191 [2024-10-14 17:29:01.214000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.214015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.191 [2024-10-14 17:29:01.214078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.214092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.191 [2024-10-14 17:29:01.214150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.214164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:04.191 [2024-10-14 17:29:01.214221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.214235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:04.191 #7 NEW cov: 12384 ft: 14486 corp: 6/12b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:12:04.191 [2024-10-14 17:29:01.253505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.191 [2024-10-14 17:29:01.253531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.192 [2024-10-14 17:29:01.253589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.192 [2024-10-14 17:29:01.253603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.451 #8 NEW cov: 12384 ft: 14693 corp: 7/14b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:12:04.451 [2024-10-14 17:29:01.313535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.313560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.451 #9 NEW cov: 12384 ft: 14705 corp: 8/15b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:12:04.451 [2024-10-14 17:29:01.353781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:12:04.451 [2024-10-14 17:29:01.353806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.451 [2024-10-14 17:29:01.353864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.353878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.451 #10 NEW cov: 12384 ft: 14771 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:12:04.451 [2024-10-14 17:29:01.414129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.414154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.451 [2024-10-14 17:29:01.414212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.414226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.451 [2024-10-14 17:29:01.414281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.414295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.451 #11 NEW cov: 12384 ft: 15006 corp: 10/20b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:12:04.451 [2024-10-14 17:29:01.474121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.474146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.451 [2024-10-14 17:29:01.474205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.474220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.451 #12 NEW cov: 12384 ft: 15087 corp: 11/22b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:12:04.451 [2024-10-14 17:29:01.514118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.451 [2024-10-14 17:29:01.514143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.710 #13 NEW cov: 12384 ft: 15141 corp: 12/23b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:12:04.710 [2024-10-14 17:29:01.574411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.574436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.710 #14 NEW cov: 12384 ft: 15174 corp: 13/24b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:12:04.710 [2024-10-14 17:29:01.614532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.614557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.614617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.614631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.710 #15 NEW cov: 12384 ft: 15236 corp: 14/26b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:12:04.710 [2024-10-14 17:29:01.654812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.654838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.654901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.654915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.654975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.654989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.710 #16 NEW cov: 12384 ft: 15258 corp: 15/29b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:12:04.710 [2024-10-14 17:29:01.715308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.715334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.715392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.715407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.715465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.715478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.715536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 
nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.715549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.715607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.715621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:04.710 #17 NEW cov: 12384 ft: 15309 corp: 16/34b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:12:04.710 [2024-10-14 17:29:01.775129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.775155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.775217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.775232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.710 [2024-10-14 17:29:01.775292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.710 [2024-10-14 17:29:01.775306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.969 #18 NEW cov: 12384 ft: 15315 corp: 17/37b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 ChangeByte- 00:12:04.969 [2024-10-14 17:29:01.835445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.969 [2024-10-14 17:29:01.835470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:04.969 [2024-10-14 17:29:01.835534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.969 [2024-10-14 17:29:01.835548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:04.969 [2024-10-14 17:29:01.835606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.969 [2024-10-14 17:29:01.835619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:04.969 [2024-10-14 17:29:01.835678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:04.969 [2024-10-14 17:29:01.835691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.229 NEW_FUNC[1/1]: 0x1c09658 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:05.229 #19 NEW cov: 12407 ft: 15377 corp: 18/41b lim: 5 exec/s: 19 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:12:05.229 [2024-10-14 17:29:02.156735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.156795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.156878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.156906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.156985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.157011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.157101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.157127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.157206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.157231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.229 #20 NEW cov: 12407 ft: 15550 corp: 19/46b lim: 5 exec/s: 20 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:12:05.229 [2024-10-14 17:29:02.225920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.225947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.229 #21 NEW cov: 12407 ft: 15588 corp: 20/47b lim: 5 exec/s: 21 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:12:05.229 [2024-10-14 17:29:02.266703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.266730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.266787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.266806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.266861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.266875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.266929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.266944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.229 [2024-10-14 17:29:02.266997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.229 [2024-10-14 17:29:02.267011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.229 #22 NEW cov: 12407 ft: 15601 corp: 21/52b lim: 5 exec/s: 22 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:12:05.489 [2024-10-14 17:29:02.326815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.326842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.326899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.326914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.326971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.326985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.327039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.327053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.327123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.327137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.489 #23 NEW cov: 12407 ft: 15664 corp: 22/57b lim: 5 exec/s: 23 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:12:05.489 [2024-10-14 17:29:02.366325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.366350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.489 #24 NEW cov: 12407 ft: 15680 corp: 23/58b lim: 5 
exec/s: 24 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:12:05.489 [2024-10-14 17:29:02.407010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.407040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.407101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.407115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.407169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.407182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.407237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.407251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.407304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.407318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.489 #25 NEW cov: 12407 ft: 15682 corp: 24/63b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:12:05.489 [2024-10-14 17:29:02.466575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.466601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.489 #26 NEW cov: 12407 ft: 15694 corp: 25/64b lim: 5 exec/s: 26 rss: 75Mb L: 1/5 MS: 1 CrossOver- 00:12:05.489 [2024-10-14 17:29:02.527410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.527435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.527493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.527507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.527561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.527575] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.527628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.527642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.527696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.527709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.489 #27 NEW cov: 12407 ft: 15700 corp: 26/69b lim: 5 exec/s: 27 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:12:05.489 [2024-10-14 17:29:02.567519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.567544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.567604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.567618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.567674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.567688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.567742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.567755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.489 [2024-10-14 17:29:02.567810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.489 [2024-10-14 17:29:02.567824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.748 #28 NEW cov: 12407 ft: 15713 corp: 27/74b lim: 5 exec/s: 28 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:12:05.748 [2024-10-14 17:29:02.607634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.748 [2024-10-14 17:29:02.607658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.748 [2024-10-14 17:29:02.607713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.607727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.607781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.607794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.607849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.607862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.607916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.607929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.749 #29 NEW cov: 12407 ft: 15762 corp: 28/79b lim: 5 exec/s: 29 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:12:05.749 [2024-10-14 17:29:02.647713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.647738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.647794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.647808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.647866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.647880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.647936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.647949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.648004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.648017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:05.749 #30 NEW cov: 12407 ft: 15779 corp: 29/84b lim: 5 exec/s: 30 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:12:05.749 [2024-10-14 17:29:02.707765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.707790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.707845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.707859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.707916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.707929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.707984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.707997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:05.749 #31 NEW cov: 12407 ft: 15796 corp: 30/88b lim: 5 exec/s: 31 rss: 75Mb L: 4/5 MS: 1 ChangeBit- 00:12:05.749 [2024-10-14 17:29:02.767786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.767811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.767869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.767883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.767941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.767954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.749 #32 NEW cov: 12407 ft: 15805 corp: 31/91b lim: 5 exec/s: 32 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:12:05.749 [2024-10-14 17:29:02.807892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.807922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.807980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.807994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:05.749 [2024-10-14 17:29:02.808055] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:05.749 [2024-10-14 17:29:02.808069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:05.749 #33 NEW cov: 12407 ft: 15811 corp: 32/94b lim: 5 exec/s: 33 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:12:06.009 [2024-10-14 17:29:02.848297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.848322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.848377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.848391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.848444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.848459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.848515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.848528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.848584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.848597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:06.009 #34 NEW cov: 12407 ft: 15837 corp: 33/99b lim: 5 exec/s: 34 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:12:06.009 [2024-10-14 17:29:02.887955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.887980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.888038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.888053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:06.009 #35 NEW cov: 12407 ft: 15860 corp: 34/101b lim: 5 exec/s: 35 rss: 75Mb L: 2/5 MS: 1 ChangeBinInt- 00:12:06.009 [2024-10-14 17:29:02.928532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.928557] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.928613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.928630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.928686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.928700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.928754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.928766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:06.009 [2024-10-14 17:29:02.928821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.009 [2024-10-14 17:29:02.928834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:06.009 #36 NEW cov: 12407 ft: 15871 corp: 35/106b lim: 5 exec/s: 18 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:12:06.009 #36 DONE cov: 12407 ft: 15871 corp: 35/106b lim: 5 exec/s: 18 rss: 75Mb 00:12:06.009 Done 36 runs in 2 second(s) 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 
00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:06.009 17:29:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:12:06.269 [2024-10-14 17:29:03.122202] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:06.269 [2024-10-14 17:29:03.122271] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106521 ] 00:12:06.269 [2024-10-14 17:29:03.318202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.269 [2024-10-14 17:29:03.356463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.528 [2024-10-14 17:29:03.415421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.528 [2024-10-14 17:29:03.431576] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:12:06.528 INFO: Running with entropic power schedule (0xFF, 100). 
00:12:06.528 INFO: Seed: 2976221823 00:12:06.528 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:06.528 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:06.528 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:12:06.528 INFO: A corpus is not provided, starting from an empty corpus 00:12:06.528 [2024-10-14 17:29:03.487158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.487188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.528 #2 INITED cov: 12179 ft: 12147 corp: 1/1b exec/s: 0 rss: 73Mb 00:12:06.528 [2024-10-14 17:29:03.527180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.527207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.528 #3 NEW cov: 12293 ft: 12766 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ShuffleBytes- 00:12:06.528 [2024-10-14 17:29:03.588042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.588069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.528 [2024-10-14 17:29:03.588131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.588146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:06.528 [2024-10-14 17:29:03.588204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.588218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:06.528 [2024-10-14 17:29:03.588274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.588288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:06.528 [2024-10-14 17:29:03.588350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.528 [2024-10-14 17:29:03.588364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:06.528 #4 NEW cov: 12299 ft: 13918 corp: 3/7b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:12:06.787 [2024-10-14 17:29:03.627487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:12:06.787 [2024-10-14 17:29:03.627514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.787 #5 NEW cov: 12384 ft: 14221 corp: 4/8b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:12:06.787 [2024-10-14 17:29:03.667532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.667561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.787 #6 NEW cov: 12384 ft: 14403 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:12:06.787 [2024-10-14 17:29:03.727702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.727728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.787 #7 NEW cov: 12384 ft: 14453 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:12:06.787 [2024-10-14 17:29:03.787898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.787926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.787 #8 NEW cov: 12384 ft: 14534 corp: 7/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:12:06.787 [2024-10-14 17:29:03.828667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.828693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:06.787 [2024-10-14 17:29:03.828749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.828763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:06.787 [2024-10-14 17:29:03.828819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.828833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:06.787 [2024-10-14 17:29:03.828889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.828902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:06.787 [2024-10-14 17:29:03.828961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:06.787 [2024-10-14 17:29:03.828975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:06.787 #9 NEW cov: 12384 ft: 14583 corp: 8/16b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:12:07.046 [2024-10-14 17:29:03.888172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.046 [2024-10-14 17:29:03.888198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.046 #10 NEW cov: 12384 ft: 14598 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:12:07.046 [2024-10-14 17:29:03.928313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:03.928340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.047 #11 NEW cov: 12384 ft: 14666 corp: 10/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:12:07.047 [2024-10-14 17:29:03.968851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:03.968880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:03.968940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:03.968955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:03.969013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:03.969032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:03.969087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:03.969101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:07.047 #12 NEW cov: 12384 ft: 14751 corp: 11/22b lim: 5 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:12:07.047 [2024-10-14 17:29:04.029213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.029240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:04.029297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.029312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 
17:29:04.029365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.029380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:04.029437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.029451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:04.029507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.029521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:07.047 #13 NEW cov: 12384 ft: 14813 corp: 12/27b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377\000"- 00:12:07.047 [2024-10-14 17:29:04.068671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.068697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.047 #14 NEW cov: 12384 ft: 14829 corp: 13/28b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:12:07.047 [2024-10-14 17:29:04.108976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.109002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.047 [2024-10-14 17:29:04.109064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.047 [2024-10-14 17:29:04.109078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.047 #15 NEW cov: 12384 ft: 15025 corp: 14/30b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 CrossOver- 00:12:07.306 [2024-10-14 17:29:04.148893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.148921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.306 #16 NEW cov: 12384 ft: 15070 corp: 15/31b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:12:07.306 [2024-10-14 17:29:04.209741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.209768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.306 [2024-10-14 17:29:04.209823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.209837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.306 [2024-10-14 17:29:04.209891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.209905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.306 [2024-10-14 17:29:04.209961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.209975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:07.306 [2024-10-14 17:29:04.210032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.210045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:07.306 #17 NEW cov: 12384 ft: 15105 corp: 16/36b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:12:07.306 [2024-10-14 17:29:04.249532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.249558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.306 [2024-10-14 17:29:04.249616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.249631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.306 [2024-10-14 17:29:04.249689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.249703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.306 #18 NEW cov: 12384 ft: 15296 corp: 17/39b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:12:07.306 [2024-10-14 17:29:04.309356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.309388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.306 #19 NEW cov: 12384 ft: 15304 corp: 18/40b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:12:07.306 [2024-10-14 17:29:04.369982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.306 [2024-10-14 17:29:04.370008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.307 [2024-10-14 17:29:04.370071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.307 [2024-10-14 17:29:04.370086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.307 [2024-10-14 17:29:04.370143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.307 [2024-10-14 17:29:04.370157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.307 [2024-10-14 17:29:04.370214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.307 [2024-10-14 17:29:04.370228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:07.824 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:07.824 #20 NEW cov: 12407 ft: 15354 corp: 19/44b lim: 5 exec/s: 20 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:12:07.824 [2024-10-14 17:29:04.710566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.824 [2024-10-14 17:29:04.710626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.825 #21 NEW cov: 12407 ft: 15527 corp: 20/45b lim: 5 exec/s: 21 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:12:07.825 [2024-10-14 17:29:04.760768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.760796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.760854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.760868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.760922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.760935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.825 #22 NEW cov: 12407 ft: 15579 corp: 21/48b lim: 5 exec/s: 22 rss: 75Mb L: 3/5 MS: 1 InsertByte- 00:12:07.825 [2024-10-14 17:29:04.801179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.801207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.825 
[2024-10-14 17:29:04.801263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.801280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.801333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.801346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.801400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.801413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.801468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.801481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:07.825 #23 NEW cov: 12407 ft: 15592 corp: 22/53b lim: 5 exec/s: 23 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:12:07.825 [2024-10-14 17:29:04.840631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.840657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.825 #24 NEW cov: 12407 ft: 15657 corp: 23/54b lim: 5 exec/s: 24 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:12:07.825 [2024-10-14 17:29:04.881077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.881103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.881159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.881173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:07.825 [2024-10-14 17:29:04.881228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:07.825 [2024-10-14 17:29:04.881241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:07.825 #25 NEW cov: 12407 ft: 15672 corp: 24/57b lim: 5 exec/s: 25 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:12:08.084 [2024-10-14 17:29:04.920890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:12:08.084 [2024-10-14 17:29:04.920915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.084 #26 NEW cov: 12407 ft: 15691 corp: 25/58b lim: 5 exec/s: 26 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:12:08.084 [2024-10-14 17:29:04.981385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.084 [2024-10-14 17:29:04.981412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.084 [2024-10-14 17:29:04.981470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.084 [2024-10-14 17:29:04.981485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.084 [2024-10-14 17:29:04.981537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.084 [2024-10-14 17:29:04.981555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:08.084 #27 NEW cov: 12407 ft: 15696 corp: 26/61b lim: 5 exec/s: 27 rss: 75Mb L: 3/5 MS: 1 EraseBytes- 00:12:08.084 [2024-10-14 17:29:05.021507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.084 [2024-10-14 17:29:05.021533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.084 [2024-10-14 17:29:05.021591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.084 [2024-10-14 17:29:05.021605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.084 [2024-10-14 17:29:05.021660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.084 [2024-10-14 17:29:05.021673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:08.084 #28 NEW cov: 12407 ft: 15738 corp: 27/64b lim: 5 exec/s: 28 rss: 75Mb L: 3/5 MS: 1 CopyPart- 00:12:08.084 [2024-10-14 17:29:05.081341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.085 [2024-10-14 17:29:05.081366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.085 #29 NEW cov: 12407 ft: 15767 corp: 28/65b lim: 5 exec/s: 29 rss: 75Mb L: 1/5 MS: 1 CopyPart- 00:12:08.085 [2024-10-14 17:29:05.121591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.085 [2024-10-14 17:29:05.121617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.085 [2024-10-14 17:29:05.121672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.085 [2024-10-14 17:29:05.121686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.085 #30 NEW cov: 12407 ft: 15772 corp: 29/67b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:12:08.085 [2024-10-14 17:29:05.161698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.085 [2024-10-14 17:29:05.161724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.085 [2024-10-14 17:29:05.161782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.085 [2024-10-14 17:29:05.161796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.343 #31 NEW cov: 12407 ft: 15791 corp: 30/69b lim: 5 exec/s: 31 rss: 75Mb L: 2/5 MS: 1 CrossOver- 00:12:08.343 [2024-10-14 17:29:05.202159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.343 [2024-10-14 17:29:05.202186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.202240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.202257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.202311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.202325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.202378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.202391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:08.344 #32 NEW cov: 12407 ft: 15804 corp: 31/73b lim: 5 exec/s: 32 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:12:08.344 [2024-10-14 17:29:05.241814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.241840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.344 #33 NEW cov: 12407 ft: 15813 corp: 32/74b lim: 5 exec/s: 33 rss: 75Mb L: 1/5 MS: 1 CopyPart- 00:12:08.344 [2024-10-14 
17:29:05.302575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.302601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.302659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.302673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.302727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.302740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.302794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.302806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:08.344 [2024-10-14 17:29:05.302860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.302874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:08.344 #34 NEW cov: 12407 ft: 15828 corp: 33/79b lim: 5 exec/s: 34 rss: 75Mb L: 5/5 MS: 1 CopyPart- 00:12:08.344 [2024-10-14 17:29:05.362171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.362198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.344 #35 NEW cov: 12407 ft: 15843 corp: 34/80b lim: 5 exec/s: 35 rss: 75Mb L: 1/5 MS: 1 ShuffleBytes- 00:12:08.344 [2024-10-14 17:29:05.422299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.344 [2024-10-14 17:29:05.422324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.603 #36 NEW cov: 12407 ft: 15859 corp: 35/81b lim: 5 exec/s: 36 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:12:08.603 [2024-10-14 17:29:05.482926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.603 [2024-10-14 17:29:05.482952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:08.603 [2024-10-14 17:29:05.483012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.603 
[2024-10-14 17:29:05.483031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:08.603 [2024-10-14 17:29:05.483086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.603 [2024-10-14 17:29:05.483099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:08.603 [2024-10-14 17:29:05.483152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:08.603 [2024-10-14 17:29:05.483166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:08.603 #37 NEW cov: 12407 ft: 15879 corp: 36/85b lim: 5 exec/s: 18 rss: 75Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:12:08.603 #37 DONE cov: 12407 ft: 15879 corp: 36/85b lim: 5 exec/s: 18 rss: 75Mb 00:12:08.603 ###### Recommended dictionary. ###### 00:12:08.603 "\377\377\377\000" # Uses: 0 00:12:08.603 ###### End of recommended dictionary. ###### 00:12:08.603 Done 37 runs in 2 second(s) 00:12:08.603 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:12:08.603 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:08.603 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:08.603 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:12:08.603 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:12:08.603 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:08.604 17:29:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:12:08.604 [2024-10-14 17:29:05.656711] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:08.604 [2024-10-14 17:29:05.656783] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106875 ] 00:12:08.863 [2024-10-14 17:29:05.846777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.863 [2024-10-14 17:29:05.885255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.863 [2024-10-14 17:29:05.944173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.121 [2024-10-14 17:29:05.960338] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:12:09.121 INFO: Running with entropic power schedule (0xFF, 100). 00:12:09.121 INFO: Seed: 1211269671 00:12:09.121 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:09.121 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:09.121 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:12:09.121 INFO: A corpus is not provided, starting from an empty corpus 00:12:09.121 #2 INITED exec/s: 0 rss: 66Mb 00:12:09.121 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:09.121 This may also happen if the target rejected all inputs we tried so far 00:12:09.121 [2024-10-14 17:29:06.026187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.121 [2024-10-14 17:29:06.026217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.121 [2024-10-14 17:29:06.026276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.121 [2024-10-14 17:29:06.026291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.121 [2024-10-14 17:29:06.026346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.121 [2024-10-14 17:29:06.026360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.121 [2024-10-14 17:29:06.026417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.121 [2024-10-14 17:29:06.026430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.380 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:12:09.380 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:09.380 #4 NEW cov: 12174 ft: 12186 corp: 2/39b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 2 CopyPart-InsertRepeatedBytes- 00:12:09.380 [2024-10-14 17:29:06.367261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.367319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.380 [2024-10-14 17:29:06.367405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.367431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.380 [2024-10-14 17:29:06.367513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.367545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.380 [2024-10-14 17:29:06.367626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.367651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:12:09.380 #8 NEW cov: 12315 ft: 12805 corp: 3/76b lim: 40 exec/s: 0 rss: 74Mb L: 37/38 MS: 4 ChangeBit-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:12:09.380 [2024-10-14 17:29:06.417079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.417105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.380 [2024-10-14 17:29:06.417163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.417178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.380 [2024-10-14 17:29:06.417246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.417259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.380 [2024-10-14 17:29:06.417314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.380 [2024-10-14 17:29:06.417327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.380 #9 NEW cov: 12321 ft: 13077 corp: 4/114b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ChangeBinInt- 00:12:09.640 [2024-10-14 17:29:06.477273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.477302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.477357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.477371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.477424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:26ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.477438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.477491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.477504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.640 #11 NEW cov: 12406 ft: 13368 corp: 5/147b lim: 40 exec/s: 0 rss: 74Mb L: 33/38 MS: 2 ChangeBit-CrossOver- 00:12:09.640 [2024-10-14 17:29:06.517311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa 
cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.517337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.517396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.517410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.517466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.517479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.517533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.517545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.640 #12 NEW cov: 12406 ft: 13481 corp: 6/184b lim: 40 exec/s: 0 rss: 74Mb L: 37/38 MS: 1 ChangeByte- 00:12:09.640 [2024-10-14 17:29:06.577487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.577514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.577572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.577587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.577641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.577655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.577711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.577725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.640 #13 NEW cov: 12406 ft: 13538 corp: 7/222b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ShuffleBytes- 00:12:09.640 [2024-10-14 17:29:06.617642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.617670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.617726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.617740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.617794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:26ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.617808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.617860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.617874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.640 #14 NEW cov: 12406 ft: 13659 corp: 8/255b lim: 40 exec/s: 0 rss: 74Mb L: 33/38 MS: 1 CopyPart- 00:12:09.640 [2024-10-14 17:29:06.677853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affefff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.677880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.677937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.677951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.678007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.678021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.640 [2024-10-14 17:29:06.678082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.640 [2024-10-14 17:29:06.678096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.640 #15 NEW cov: 12406 ft: 13696 corp: 9/293b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ChangeBit- 00:12:09.900 [2024-10-14 17:29:06.738127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.738155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.738211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.738225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.738280] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.738294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.738349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.738363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.738417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:babababa cdw11:bababa02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.738430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:09.900 #16 NEW cov: 12406 ft: 13770 corp: 10/333b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:12:09.900 [2024-10-14 17:29:06.798082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.798109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.798165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.798179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.798239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.798252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.798308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:baba00ba cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.798322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.900 #17 NEW cov: 12406 ft: 13799 corp: 11/370b lim: 40 exec/s: 0 rss: 74Mb L: 37/40 MS: 1 ChangeByte- 00:12:09.900 [2024-10-14 17:29:06.838265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.838291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.838350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:dfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.838364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:12:09.900 [2024-10-14 17:29:06.838421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:26ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.838434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.900 [2024-10-14 17:29:06.838487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.838500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.900 #18 NEW cov: 12406 ft: 13810 corp: 12/403b lim: 40 exec/s: 0 rss: 74Mb L: 33/40 MS: 1 ChangeBit- 00:12:09.900 [2024-10-14 17:29:06.878462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.900 [2024-10-14 17:29:06.878488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.878544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.878558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.878612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.878626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.878680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.878694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.878749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.878762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:09.901 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:09.901 #19 NEW cov: 12429 ft: 13872 corp: 13/443b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:12:09.901 [2024-10-14 17:29:06.938513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.938539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.938593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.938607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.938662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.938675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.938729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.938742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:09.901 #20 NEW cov: 12429 ft: 13916 corp: 14/475b lim: 40 exec/s: 0 rss: 74Mb L: 32/40 MS: 1 EraseBytes- 00:12:09.901 [2024-10-14 17:29:06.978624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.978649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.978705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.978719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.978775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.978788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:09.901 [2024-10-14 17:29:06.978843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:09.901 [2024-10-14 17:29:06.978856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.164 #21 NEW cov: 12429 ft: 13948 corp: 15/507b lim: 40 exec/s: 21 rss: 74Mb L: 32/40 MS: 1 ShuffleBytes- 00:12:10.164 [2024-10-14 17:29:07.038790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.038816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.038872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.038886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.038943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 
cdw10:26ffffff cdw11:ffff2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.038959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.039015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.039032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.164 #22 NEW cov: 12429 ft: 14031 corp: 16/541b lim: 40 exec/s: 22 rss: 74Mb L: 34/40 MS: 1 InsertByte- 00:12:10.164 [2024-10-14 17:29:07.078841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.078866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.078919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.078933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.078989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:26ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.079002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.079075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.079099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.164 #23 NEW cov: 12429 ft: 14050 corp: 17/574b lim: 40 exec/s: 23 rss: 74Mb L: 33/40 MS: 1 ShuffleBytes- 00:12:10.164 [2024-10-14 17:29:07.118961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.118986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.119044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.119075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.119138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.119151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.119207] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.119220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.164 #24 NEW cov: 12429 ft: 14062 corp: 18/613b lim: 40 exec/s: 24 rss: 74Mb L: 39/40 MS: 1 CopyPart- 00:12:10.164 [2024-10-14 17:29:07.159083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.159108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.159164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.159182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.159237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.159252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.159305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffff5cff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.159318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.164 #25 NEW cov: 12429 ft: 14122 corp: 19/645b lim: 40 exec/s: 25 rss: 74Mb L: 32/40 MS: 1 ChangeByte- 00:12:10.164 [2024-10-14 17:29:07.219458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.219484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.219542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.164 [2024-10-14 17:29:07.219557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.164 [2024-10-14 17:29:07.219611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.165 [2024-10-14 17:29:07.219624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.165 [2024-10-14 17:29:07.219682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.165 [2024-10-14 17:29:07.219696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:12:10.165 [2024-10-14 17:29:07.219752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffff00 cdw11:ffffff4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.165 [2024-10-14 17:29:07.219766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:10.483 #26 NEW cov: 12429 ft: 14149 corp: 20/685b lim: 40 exec/s: 26 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:12:10.483 [2024-10-14 17:29:07.279479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.483 [2024-10-14 17:29:07.279507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.483 [2024-10-14 17:29:07.279566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.483 [2024-10-14 17:29:07.279580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.483 [2024-10-14 17:29:07.279634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.483 [2024-10-14 17:29:07.279648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.279703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.279720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.484 #27 NEW cov: 12429 ft: 14180 corp: 21/723b lim: 40 exec/s: 27 rss: 75Mb L: 38/40 MS: 1 ChangeBinInt- 00:12:10.484 [2024-10-14 17:29:07.319536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.319562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.319621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.319635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.319690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.319704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.319758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.319772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.484 #28 NEW cov: 12429 ft: 14258 corp: 22/760b lim: 40 exec/s: 28 rss: 75Mb L: 37/40 MS: 1 ChangeBit- 00:12:10.484 [2024-10-14 17:29:07.359527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.359553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.359610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.359624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.359682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.359696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.484 #29 NEW cov: 12429 ft: 14761 corp: 23/786b lim: 40 exec/s: 29 rss: 75Mb L: 26/40 MS: 1 EraseBytes- 00:12:10.484 [2024-10-14 17:29:07.419802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.419827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.419886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:21bababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.419899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.419955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.419970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.420030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.420047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.484 #30 NEW cov: 12429 ft: 14768 corp: 24/823b lim: 40 exec/s: 30 rss: 75Mb L: 37/40 MS: 1 ChangeByte- 00:12:10.484 [2024-10-14 17:29:07.479963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.479988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.480062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:dfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.480077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.480141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:26ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.480154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.480211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.480224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.484 #31 NEW cov: 12429 ft: 14778 corp: 25/857b lim: 40 exec/s: 31 rss: 75Mb L: 34/40 MS: 1 InsertByte- 00:12:10.484 [2024-10-14 17:29:07.540149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.540174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.540232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:21bababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.540246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.540303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:ba7fbaba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.540316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.484 [2024-10-14 17:29:07.540372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.484 [2024-10-14 17:29:07.540386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.805 #32 NEW cov: 12429 ft: 14792 corp: 26/894b lim: 40 exec/s: 32 rss: 75Mb L: 37/40 MS: 1 ShuffleBytes- 00:12:10.805 [2024-10-14 17:29:07.600351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.600377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.600436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.600450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.600505] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.600521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.600580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babab9ba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.600593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.805 #33 NEW cov: 12429 ft: 14852 corp: 27/932b lim: 40 exec/s: 33 rss: 75Mb L: 38/40 MS: 1 InsertByte- 00:12:10.805 [2024-10-14 17:29:07.640421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:f7ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.640447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.640503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.640518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.640572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:26ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.640585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.640644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.640658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.805 #34 NEW cov: 12429 ft: 14877 corp: 28/965b lim: 40 exec/s: 34 rss: 75Mb L: 33/40 MS: 1 ChangeBit- 00:12:10.805 [2024-10-14 17:29:07.680546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affefff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.680572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.680632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f7ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.680646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.680700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.680714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:12:10.805 [2024-10-14 17:29:07.680769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.680782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.805 #35 NEW cov: 12429 ft: 14897 corp: 29/1003b lim: 40 exec/s: 35 rss: 75Mb L: 38/40 MS: 1 ChangeBit- 00:12:10.805 [2024-10-14 17:29:07.740695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.740721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.740786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.740799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.740857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:262effff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.740871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.740928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.740942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.805 #36 NEW cov: 12429 ft: 14902 corp: 30/1037b lim: 40 exec/s: 36 rss: 75Mb L: 34/40 MS: 1 InsertByte- 00:12:10.805 [2024-10-14 17:29:07.800910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.800936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.800996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.801010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.801083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:26ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.801097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.805 [2024-10-14 17:29:07.801156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:2fffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.805 [2024-10-14 17:29:07.801171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:10.806 #37 NEW cov: 12429 ft: 14907 corp: 31/1071b lim: 40 exec/s: 37 rss: 75Mb L: 34/40 MS: 1 ShuffleBytes- 00:12:10.806 [2024-10-14 17:29:07.841017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:babababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.806 [2024-10-14 17:29:07.841048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:10.806 [2024-10-14 17:29:07.841108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:21bababa cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.806 [2024-10-14 17:29:07.841122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:10.806 [2024-10-14 17:29:07.841177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:7fbababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.806 [2024-10-14 17:29:07.841192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:10.806 [2024-10-14 17:29:07.841250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:bab2baba cdw11:babababa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:10.806 [2024-10-14 17:29:07.841264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:11.184 #38 NEW cov: 12429 ft: 14923 corp: 32/1108b lim: 40 exec/s: 38 rss: 75Mb L: 37/40 MS: 1 ChangeBit- 00:12:11.184 [2024-10-14 17:29:07.881159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affefff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.881186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.881247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:f7ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.881261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.881317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff26 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.881331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.881392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff01 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.881407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:11.184 #39 NEW cov: 12429 ft: 14935 corp: 33/1146b lim: 40 exec/s: 39 rss: 76Mb L: 38/40 MS: 1 CMP- DE: "\001\000\000\000"- 00:12:11.184 [2024-10-14 17:29:07.941296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.941323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.941380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.941394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.941450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:26ffffff cdw11:ffff2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.941465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.941521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.941535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:11.184 #40 NEW cov: 12429 ft: 14939 corp: 34/1180b lim: 40 exec/s: 40 rss: 76Mb L: 34/40 MS: 1 CrossOver- 00:12:11.184 [2024-10-14 17:29:07.981282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.981309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.981367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff2fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.981381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:11.184 [2024-10-14 17:29:07.981439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.184 [2024-10-14 17:29:07.981453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:11.184 #41 NEW cov: 12429 ft: 15013 corp: 35/1206b lim: 40 exec/s: 20 rss: 76Mb L: 26/40 MS: 1 ShuffleBytes- 00:12:11.184 #41 DONE cov: 12429 ft: 15013 corp: 35/1206b lim: 40 exec/s: 20 rss: 76Mb 00:12:11.184 ###### Recommended dictionary. ###### 00:12:11.184 "\001\000\000\000" # Uses: 0 00:12:11.184 ###### End of recommended dictionary. 
###### 00:12:11.184 Done 41 runs in 2 second(s) 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:11.184 17:29:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:12:11.184 [2024-10-14 17:29:08.175010] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:11.184 [2024-10-14 17:29:08.175107] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107239 ] 00:12:11.443 [2024-10-14 17:29:08.366992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.443 [2024-10-14 17:29:08.405270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.443 [2024-10-14 17:29:08.464213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.443 [2024-10-14 17:29:08.480352] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:12:11.443 INFO: Running with entropic power schedule (0xFF, 100). 00:12:11.443 INFO: Seed: 3729253484 00:12:11.443 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:11.443 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:11.443 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:12:11.443 INFO: A corpus is not provided, starting from an empty corpus 00:12:11.443 #2 INITED exec/s: 0 rss: 66Mb 00:12:11.443 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:11.443 This may also happen if the target rejected all inputs we tried so far 00:12:11.702 [2024-10-14 17:29:08.539665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.702 [2024-10-14 17:29:08.539694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.961 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:12:11.961 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:11.961 #9 NEW cov: 12215 ft: 12204 corp: 2/10b lim: 40 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 ChangeBit-CMP- DE: "\010\000\000\000\000\000\000\000"- 00:12:11.961 [2024-10-14 17:29:08.880686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:08.880743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.961 #10 NEW cov: 12328 ft: 12971 corp: 3/19b lim: 40 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 PersAutoDict- DE: "\010\000\000\000\000\000\000\000"- 00:12:11.961 [2024-10-14 17:29:08.930713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:08.930740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.961 [2024-10-14 17:29:08.930801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:08.930816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:12:11.961 #11 NEW cov: 12334 ft: 13857 corp: 4/39b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:12:11.961 [2024-10-14 17:29:08.991115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:08.991142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:11.961 [2024-10-14 17:29:08.991208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:08.991223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:11.961 [2024-10-14 17:29:08.991285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:160a0800 cdw11:00001616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:08.991299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:11.961 #12 NEW cov: 12419 ft: 14442 corp: 5/68b lim: 40 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 CrossOver- 00:12:11.961 [2024-10-14 17:29:09.050916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:11.961 [2024-10-14 17:29:09.050944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.221 #13 NEW cov: 12419 ft: 14616 corp: 6/77b lim: 40 exec/s: 0 rss: 74Mb L: 9/29 MS: 1 ChangeBit- 00:12:12.221 [2024-10-14 17:29:09.111225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00e50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.111251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.221 [2024-10-14 17:29:09.111313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00001616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.111336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.221 #14 NEW cov: 12419 ft: 14697 corp: 7/98b lim: 40 exec/s: 0 rss: 74Mb L: 21/29 MS: 1 InsertByte- 00:12:12.221 [2024-10-14 17:29:09.151144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.151171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.221 #16 NEW cov: 12419 ft: 14771 corp: 8/113b lim: 40 exec/s: 0 rss: 74Mb L: 15/29 MS: 2 EraseBytes-PersAutoDict- DE: "\010\000\000\000\000\000\000\000"- 00:12:12.221 [2024-10-14 17:29:09.191297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08030000 cdw11:0000fcff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.191323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.221 #17 NEW cov: 12419 ft: 14831 corp: 9/122b lim: 40 exec/s: 0 rss: 74Mb L: 9/29 MS: 1 ChangeBinInt- 00:12:12.221 [2024-10-14 17:29:09.251802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.251828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.221 [2024-10-14 17:29:09.251892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01010101 cdw11:01010101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.251906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.221 [2024-10-14 17:29:09.251965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:01010101 cdw11:01010101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.251979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:12.221 #18 NEW cov: 12419 ft: 14852 corp: 10/147b lim: 40 exec/s: 0 rss: 74Mb L: 25/29 MS: 1 InsertRepeatedBytes- 00:12:12.221 [2024-10-14 17:29:09.291575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08030000 cdw11:0000ecff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.221 [2024-10-14 17:29:09.291601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.480 #19 NEW cov: 12419 ft: 14922 corp: 11/156b lim: 40 exec/s: 0 rss: 74Mb L: 9/29 MS: 1 ChangeBit- 00:12:12.480 [2024-10-14 17:29:09.352068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.352094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.480 [2024-10-14 17:29:09.352155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.352169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.480 [2024-10-14 17:29:09.352226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16000016 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.352240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:12.480 #20 NEW cov: 12419 ft: 14991 corp: 12/186b lim: 40 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:12:12.480 [2024-10-14 17:29:09.392014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.392051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.480 [2024-10-14 17:29:09.392114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00080000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.392128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.480 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:12.480 #21 NEW cov: 12442 ft: 15103 corp: 13/204b lim: 40 exec/s: 0 rss: 75Mb L: 18/30 MS: 1 CrossOver- 00:12:12.480 [2024-10-14 17:29:09.432344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:10000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.432370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.480 [2024-10-14 17:29:09.432435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01010101 cdw11:01010101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.432450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.480 [2024-10-14 17:29:09.432510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:01010101 cdw11:01010101 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.432524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:12.480 #22 NEW cov: 12442 ft: 15116 corp: 14/229b lim: 40 exec/s: 0 rss: 75Mb L: 25/30 MS: 1 ChangeBinInt- 00:12:12.480 [2024-10-14 17:29:09.492141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:000a0008 cdw11:0008ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.480 [2024-10-14 17:29:09.492167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.480 #25 NEW cov: 12442 ft: 15127 corp: 15/240b lim: 40 exec/s: 0 rss: 75Mb L: 11/30 MS: 3 CrossOver-ShuffleBytes-CMP- DE: "\377\377\001\000"- 00:12:12.480 [2024-10-14 17:29:09.532239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.481 [2024-10-14 17:29:09.532264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.739 #26 NEW cov: 12442 ft: 15140 corp: 16/255b lim: 40 exec/s: 26 rss: 75Mb L: 15/30 MS: 1 CopyPart- 00:12:12.739 [2024-10-14 17:29:09.592419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08fe0000 cdw11:0000fcff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.739 [2024-10-14 17:29:09.592445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.739 #27 NEW cov: 12442 ft: 15219 corp: 17/264b lim: 40 exec/s: 27 rss: 75Mb L: 9/30 MS: 1 ChangeBinInt- 00:12:12.739 [2024-10-14 17:29:09.632561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a08fc00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.739 [2024-10-14 17:29:09.632586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.739 #28 NEW cov: 12442 ft: 15235 corp: 18/273b lim: 40 exec/s: 28 rss: 75Mb L: 9/30 MS: 1 ChangeBinInt- 00:12:12.739 [2024-10-14 17:29:09.672688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000016 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.739 [2024-10-14 17:29:09.672714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.739 #29 NEW cov: 12442 ft: 15242 corp: 19/287b lim: 40 exec/s: 29 rss: 75Mb L: 14/30 MS: 1 EraseBytes- 00:12:12.739 [2024-10-14 17:29:09.712940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.739 [2024-10-14 17:29:09.712966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.739 [2024-10-14 17:29:09.713033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00080000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.739 [2024-10-14 17:29:09.713048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.739 #30 NEW cov: 12442 ft: 15276 corp: 20/305b lim: 40 exec/s: 30 rss: 75Mb L: 18/30 MS: 1 ShuffleBytes- 00:12:12.739 [2024-10-14 17:29:09.773113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.739 [2024-10-14 17:29:09.773139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.739 [2024-10-14 17:29:09.773201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00080000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.740 [2024-10-14 17:29:09.773215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.740 #31 NEW cov: 12442 ft: 15302 corp: 21/323b lim: 40 exec/s: 31 rss: 75Mb L: 18/30 MS: 1 ChangeByte- 00:12:12.740 [2024-10-14 17:29:09.813044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000420 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.740 [2024-10-14 17:29:09.813070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.999 #32 NEW cov: 12442 ft: 15334 corp: 22/332b lim: 40 exec/s: 32 rss: 75Mb L: 9/30 MS: 1 ChangeBit- 00:12:12.999 [2024-10-14 17:29:09.853350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:09.853376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.999 [2024-10-14 17:29:09.853435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:40000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:09.853449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.999 #33 NEW cov: 12442 ft: 15356 corp: 23/350b lim: 40 exec/s: 33 rss: 75Mb L: 18/30 MS: 1 CopyPart- 00:12:12.999 [2024-10-14 17:29:09.913291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08030000 cdw11:d000fcff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:09.913318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.999 #34 NEW cov: 12442 ft: 15386 corp: 24/359b lim: 40 exec/s: 34 rss: 75Mb L: 9/30 MS: 1 ChangeByte- 00:12:12.999 [2024-10-14 17:29:09.953410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffff0100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:09.953435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.999 #35 NEW cov: 12442 ft: 15396 corp: 25/368b lim: 40 exec/s: 35 rss: 75Mb L: 9/30 MS: 1 PersAutoDict- DE: "\377\377\001\000"- 00:12:12.999 [2024-10-14 17:29:09.993681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a08fc00 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:09.993710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:12.999 [2024-10-14 17:29:09.993773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0f00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:09.993787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:12.999 #36 NEW cov: 12442 ft: 15410 corp: 26/385b lim: 40 exec/s: 36 rss: 75Mb L: 17/30 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\017"- 00:12:12.999 [2024-10-14 17:29:10.053743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:12.999 [2024-10-14 17:29:10.053773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.258 #37 NEW cov: 12442 ft: 15436 corp: 27/400b lim: 40 exec/s: 37 rss: 75Mb L: 15/30 MS: 1 CrossOver- 00:12:13.258 [2024-10-14 17:29:10.146676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:0000e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.258 [2024-10-14 17:29:10.146715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.258 [2024-10-14 17:29:10.146808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e1e1e1e1 cdw11:e1e1e1e1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.258 [2024-10-14 17:29:10.146824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:13.258 [2024-10-14 17:29:10.146921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e1e1e100 cdw11:000a0800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.258 [2024-10-14 17:29:10.146935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:13.259 #38 NEW cov: 12442 ft: 15497 corp: 28/428b lim: 40 exec/s: 38 rss: 75Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:12:13.259 [2024-10-14 17:29:10.196313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000016 cdw11:16601616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.259 [2024-10-14 17:29:10.196342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.259 #39 NEW cov: 12442 ft: 15518 corp: 29/443b lim: 40 exec/s: 39 rss: 75Mb L: 15/30 MS: 1 InsertByte- 00:12:13.259 [2024-10-14 17:29:10.267302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080053 cdw11:53535353 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.259 [2024-10-14 17:29:10.267331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.259 [2024-10-14 17:29:10.267433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:53000000 cdw11:00000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.259 [2024-10-14 17:29:10.267451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:13.259 [2024-10-14 17:29:10.267545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.259 [2024-10-14 17:29:10.267561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:13.259 #40 NEW cov: 12442 ft: 15543 corp: 30/469b lim: 40 exec/s: 40 rss: 75Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:12:13.259 [2024-10-14 17:29:10.316751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.259 [2024-10-14 17:29:10.316781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.518 #41 NEW cov: 12442 ft: 15576 corp: 31/484b lim: 40 exec/s: 41 rss: 75Mb L: 15/30 MS: 1 ShuffleBytes- 00:12:13.518 [2024-10-14 17:29:10.387477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a081000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.518 [2024-10-14 17:29:10.387506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.518 [2024-10-14 17:29:10.387610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00080000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.518 [2024-10-14 17:29:10.387627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:13.518 #42 NEW cov: 12442 ft: 15590 corp: 32/502b lim: 40 exec/s: 42 rss: 75Mb L: 18/30 MS: 1 ChangeBit- 00:12:13.518 [2024-10-14 17:29:10.437344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.518 [2024-10-14 17:29:10.437372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.518 #43 NEW cov: 12442 ft: 15620 corp: 33/511b lim: 40 exec/s: 43 rss: 75Mb L: 9/30 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\017"- 00:12:13.518 [2024-10-14 17:29:10.508675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a080000 cdw11:00e50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.518 [2024-10-14 17:29:10.508703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:13.518 [2024-10-14 17:29:10.508812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00001616 cdw11:97979716 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.518 [2024-10-14 17:29:10.508830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:13.518 [2024-10-14 17:29:10.508921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16161616 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:13.518 [2024-10-14 17:29:10.508938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:13.518 #44 NEW cov: 12442 ft: 15634 corp: 34/535b lim: 40 exec/s: 22 rss: 75Mb L: 24/30 MS: 1 InsertRepeatedBytes- 00:12:13.518 #44 DONE cov: 12442 ft: 15634 corp: 34/535b lim: 40 exec/s: 22 rss: 75Mb 00:12:13.518 ###### Recommended dictionary. ###### 00:12:13.518 "\010\000\000\000\000\000\000\000" # Uses: 2 00:12:13.518 "\377\377\001\000" # Uses: 1 00:12:13.518 "\377\377\377\377\377\377\377\017" # Uses: 1 00:12:13.518 ###### End of recommended dictionary. 
###### 00:12:13.518 Done 44 runs in 2 second(s) 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:13.778 17:29:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:12:13.778 [2024-10-14 17:29:10.705212] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:13.778 [2024-10-14 17:29:10.705291] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107598 ] 00:12:14.038 [2024-10-14 17:29:10.893499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.038 [2024-10-14 17:29:10.932201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.038 [2024-10-14 17:29:10.991373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.038 [2024-10-14 17:29:11.007514] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:12:14.038 INFO: Running with entropic power schedule (0xFF, 100). 00:12:14.038 INFO: Seed: 1961283473 00:12:14.038 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:14.038 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:14.038 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:12:14.038 INFO: A corpus is not provided, starting from an empty corpus 00:12:14.038 #2 INITED exec/s: 0 rss: 67Mb 00:12:14.038 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:14.038 This may also happen if the target rejected all inputs we tried so far 00:12:14.038 [2024-10-14 17:29:11.066878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.038 [2024-10-14 17:29:11.066907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.038 [2024-10-14 17:29:11.066967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.038 [2024-10-14 17:29:11.066982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.606 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:12:14.606 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:14.606 #4 NEW cov: 12213 ft: 12214 corp: 2/20b lim: 40 exec/s: 0 rss: 74Mb L: 19/19 MS: 2 CrossOver-InsertRepeatedBytes- 00:12:14.606 [2024-10-14 17:29:11.407984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.606 [2024-10-14 17:29:11.408048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.606 [2024-10-14 17:29:11.408130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.606 [2024-10-14 17:29:11.408162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.606 [2024-10-14 17:29:11.408239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.606 [2024-10-14 17:29:11.408264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.607 #10 NEW cov: 12326 ft: 13105 corp: 3/50b lim: 40 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CrossOver- 00:12:14.607 [2024-10-14 17:29:11.477751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.477778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.477832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.477846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.607 #11 NEW cov: 12332 ft: 13342 corp: 4/69b lim: 40 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 ChangeByte- 00:12:14.607 [2024-10-14 17:29:11.517641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.517666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.607 #21 NEW cov: 12417 ft: 14254 corp: 5/83b lim: 40 exec/s: 0 rss: 74Mb L: 14/30 MS: 5 ChangeBinInt-InsertByte-ChangeByte-ChangeBit-CrossOver- 00:12:14.607 [2024-10-14 17:29:11.558249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.558274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.558329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.558343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.558396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.558409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.558462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.558475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:14.607 #22 NEW cov: 12417 ft: 14695 corp: 6/117b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:12:14.607 [2024-10-14 17:29:11.618446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 
[2024-10-14 17:29:11.618472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.618528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.618542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.618596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.618609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.618664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.618677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:14.607 #23 NEW cov: 12417 ft: 14779 corp: 7/156b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:12:14.607 [2024-10-14 17:29:11.678772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.678798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.678854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.678868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.678921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.678934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.678989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.679003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:14.607 [2024-10-14 17:29:11.679052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.607 [2024-10-14 17:29:11.679065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:14.866 #24 NEW cov: 12417 ft: 14899 corp: 8/196b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:12:14.866 [2024-10-14 17:29:11.738423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002823 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:12:14.866 [2024-10-14 17:29:11.738449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.738504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.738518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.866 #25 NEW cov: 12417 ft: 14926 corp: 9/215b lim: 40 exec/s: 0 rss: 75Mb L: 19/40 MS: 1 ChangeByte- 00:12:14.866 [2024-10-14 17:29:11.799066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.799092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.799149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.799163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.799217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.799230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.799286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.799300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.799354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.799366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:14.866 #26 NEW cov: 12417 ft: 15031 corp: 10/255b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:12:14.866 [2024-10-14 17:29:11.859218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.859243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.859296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.859310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.859364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.859377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.859428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.859441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.859494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.859507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:14.866 #27 NEW cov: 12417 ft: 15087 corp: 11/295b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:12:14.866 [2024-10-14 17:29:11.899317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.899342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.899396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.899410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.899462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.899475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.899525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.899541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.899592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0000002c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.899605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:14.866 #28 NEW cov: 12417 ft: 15169 corp: 12/335b lim: 40 exec/s: 0 rss: 75Mb L: 40/40 MS: 1 InsertByte- 00:12:14.866 [2024-10-14 17:29:11.938956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002823 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.938982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:14.866 [2024-10-14 17:29:11.939042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fe000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:12:14.866 [2024-10-14 17:29:11.939072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.126 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:15.126 #29 NEW cov: 12440 ft: 15230 corp: 13/354b lim: 40 exec/s: 0 rss: 75Mb L: 19/40 MS: 1 ChangeBinInt- 00:12:15.126 [2024-10-14 17:29:11.999149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002823 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:11.999174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:11.999228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fe000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:11.999242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.126 #30 NEW cov: 12440 ft: 15251 corp: 14/373b lim: 40 exec/s: 0 rss: 76Mb L: 19/40 MS: 1 CMP- DE: "\020\000"- 00:12:15.126 [2024-10-14 17:29:12.059582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.059607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.059660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.059674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.059727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.059739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.059791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.059804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.126 #31 NEW cov: 12440 ft: 15277 corp: 15/408b lim: 40 exec/s: 31 rss: 76Mb L: 35/40 MS: 1 EraseBytes- 00:12:15.126 [2024-10-14 17:29:12.119812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.119840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.119895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.119909] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.119961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.119974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.120031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.120045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.126 #32 NEW cov: 12440 ft: 15343 corp: 16/447b lim: 40 exec/s: 32 rss: 76Mb L: 39/40 MS: 1 ChangeBit- 00:12:15.126 [2024-10-14 17:29:12.159876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.159901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.159956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.159970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.160021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.160039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.126 [2024-10-14 17:29:12.160119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.126 [2024-10-14 17:29:12.160133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.126 #33 NEW cov: 12440 ft: 15401 corp: 17/483b lim: 40 exec/s: 33 rss: 76Mb L: 36/40 MS: 1 CopyPart- 00:12:15.385 [2024-10-14 17:29:12.219949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.219974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.220034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.220049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.220102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.220116] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.386 #34 NEW cov: 12440 ft: 15450 corp: 18/513b lim: 40 exec/s: 34 rss: 76Mb L: 30/40 MS: 1 ShuffleBytes- 00:12:15.386 [2024-10-14 17:29:12.260332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.260359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.260412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.260425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.260478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000028 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.260490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.260544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.260557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.260610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:0000002c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.260623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:15.386 #35 NEW cov: 12440 ft: 15472 corp: 19/553b lim: 40 exec/s: 35 rss: 76Mb L: 40/40 MS: 1 ChangeBinInt- 00:12:15.386 [2024-10-14 17:29:12.320022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000000d2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.320051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.320102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.320116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.386 #36 NEW cov: 12440 ft: 15529 corp: 20/572b lim: 40 exec/s: 36 rss: 76Mb L: 19/40 MS: 1 ChangeByte- 00:12:15.386 [2024-10-14 17:29:12.360440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.360465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.360519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND 
(19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.360533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.360584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.360597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.360648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.360662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.386 #37 NEW cov: 12440 ft: 15546 corp: 21/607b lim: 40 exec/s: 37 rss: 76Mb L: 35/40 MS: 1 ChangeBit- 00:12:15.386 [2024-10-14 17:29:12.420793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.420821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.420874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.420904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.420956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.420971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.421024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.421044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.386 [2024-10-14 17:29:12.421094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.386 [2024-10-14 17:29:12.421108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:15.386 #38 NEW cov: 12440 ft: 15588 corp: 22/647b lim: 40 exec/s: 38 rss: 76Mb L: 40/40 MS: 1 CMP- DE: "\000\000\000\004"- 00:12:15.645 [2024-10-14 17:29:12.480824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.645 [2024-10-14 17:29:12.480848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.645 [2024-10-14 17:29:12.480903] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.645 [2024-10-14 17:29:12.480917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.645 [2024-10-14 17:29:12.480968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.645 [2024-10-14 17:29:12.480982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.481037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.481051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.646 #39 NEW cov: 12440 ft: 15650 corp: 23/682b lim: 40 exec/s: 39 rss: 76Mb L: 35/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:12:15.646 [2024-10-14 17:29:12.520880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.520905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.520959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.520973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.521029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00fbff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.521046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.521101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.521114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.646 #40 NEW cov: 12440 ft: 15659 corp: 24/716b lim: 40 exec/s: 40 rss: 76Mb L: 34/40 MS: 1 ChangeBinInt- 00:12:15.646 [2024-10-14 17:29:12.561008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.561037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.561093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.561107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:12:15.646 [2024-10-14 17:29:12.561159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.561172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.561225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.561238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.646 #41 NEW cov: 12440 ft: 15683 corp: 25/755b lim: 40 exec/s: 41 rss: 77Mb L: 39/40 MS: 1 CopyPart- 00:12:15.646 [2024-10-14 17:29:12.620874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002823 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.620900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.620954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.620968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.646 #42 NEW cov: 12440 ft: 15704 corp: 26/774b lim: 40 exec/s: 42 rss: 77Mb L: 19/40 MS: 1 ShuffleBytes- 00:12:15.646 [2024-10-14 17:29:12.661017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002822 cdw11:fff60000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.661049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.661105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.661120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.646 #43 NEW cov: 12440 ft: 15744 corp: 27/793b lim: 40 exec/s: 43 rss: 77Mb L: 19/40 MS: 1 ChangeBinInt- 00:12:15.646 [2024-10-14 17:29:12.721650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.721676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.721733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.721747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.721799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 
17:29:12.721812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.721863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.721876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.646 [2024-10-14 17:29:12.721927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.646 [2024-10-14 17:29:12.721956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:15.906 #44 NEW cov: 12440 ft: 15757 corp: 28/833b lim: 40 exec/s: 44 rss: 77Mb L: 40/40 MS: 1 CopyPart- 00:12:15.906 [2024-10-14 17:29:12.781814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.781841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.781895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.781909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.781961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.781974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.782030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.782043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.782111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.782124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:15.906 #45 NEW cov: 12440 ft: 15789 corp: 29/873b lim: 40 exec/s: 45 rss: 77Mb L: 40/40 MS: 1 ChangeBinInt- 00:12:15.906 [2024-10-14 17:29:12.821482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002800 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.821508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.821564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:12:15.906 [2024-10-14 17:29:12.821577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.906 #46 NEW cov: 12440 ft: 15800 corp: 30/892b lim: 40 exec/s: 46 rss: 77Mb L: 19/40 MS: 1 ChangeBit- 00:12:15.906 [2024-10-14 17:29:12.861864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:000000af SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.861891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.861946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.861960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.862010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.862023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.862097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fcffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.862111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.906 #47 NEW cov: 12440 ft: 15811 corp: 31/931b lim: 40 exec/s: 47 rss: 77Mb L: 39/40 MS: 1 ChangeBinInt- 00:12:15.906 [2024-10-14 17:29:12.922226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.922251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.922305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.922320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.922371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.922384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.922435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.922448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.922498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.922511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:15.906 #48 NEW cov: 12440 ft: 15850 corp: 32/971b lim: 40 exec/s: 48 rss: 77Mb L: 40/40 MS: 1 CMP- DE: "\002\000"- 00:12:15.906 [2024-10-14 17:29:12.962184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.962209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.962263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.962277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.962330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00b4b4b4 cdw11:b4b4b4b4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.962346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:15.906 [2024-10-14 17:29:12.962399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:b4b4b4b4 cdw11:b4b4000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:15.906 [2024-10-14 17:29:12.962412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:15.906 #49 NEW cov: 12440 ft: 15901 corp: 33/1003b lim: 40 exec/s: 49 rss: 77Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:12:16.166 [2024-10-14 17:29:13.002243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:16.166 [2024-10-14 17:29:13.002269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:16.166 [2024-10-14 17:29:13.002334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:16.166 [2024-10-14 17:29:13.002349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:16.166 [2024-10-14 17:29:13.002416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:16.166 [2024-10-14 17:29:13.002430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:16.166 [2024-10-14 17:29:13.002485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:16.166 [2024-10-14 17:29:13.002498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:16.166 #50 NEW cov: 12440 ft: 15969 corp: 34/1038b lim: 40 exec/s: 50 rss: 77Mb L: 35/40 MS: 1 ChangeBit- 00:12:16.166 [2024-10-14 
17:29:13.042025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a002823 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:16.166 [2024-10-14 17:29:13.042057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:16.166 [2024-10-14 17:29:13.042109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:16.166 [2024-10-14 17:29:13.042123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:16.166 #51 NEW cov: 12440 ft: 15980 corp: 35/1057b lim: 40 exec/s: 25 rss: 77Mb L: 19/40 MS: 1 CopyPart- 00:12:16.166 #51 DONE cov: 12440 ft: 15980 corp: 35/1057b lim: 40 exec/s: 25 rss: 77Mb 00:12:16.166 ###### Recommended dictionary. ###### 00:12:16.166 "\020\000" # Uses: 0 00:12:16.166 "\000\000\000\004" # Uses: 0 00:12:16.166 "\001\000\000\000\000\000\000\000" # Uses: 0 00:12:16.166 "\002\000" # Uses: 0 00:12:16.166 ###### End of recommended dictionary. ###### 00:12:16.166 Done 51 runs in 2 second(s) 00:12:16.166 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:12:16.166 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:16.167 17:29:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:12:16.167 [2024-10-14 17:29:13.218342] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:16.167 [2024-10-14 17:29:13.218431] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107892 ] 00:12:16.426 [2024-10-14 17:29:13.412625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.426 [2024-10-14 17:29:13.450786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.426 [2024-10-14 17:29:13.509842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.685 [2024-10-14 17:29:13.525989] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:12:16.685 INFO: Running with entropic power schedule (0xFF, 100). 00:12:16.685 INFO: Seed: 186340858 00:12:16.685 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:16.685 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:16.685 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:12:16.685 INFO: A corpus is not provided, starting from an empty corpus 00:12:16.685 #2 INITED exec/s: 0 rss: 66Mb 00:12:16.685 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:16.685 This may also happen if the target rejected all inputs we tried so far 00:12:16.685 [2024-10-14 17:29:13.570883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.685 [2024-10-14 17:29:13.570918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:16.945 NEW_FUNC[1/713]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:12:16.945 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:16.945 #5 NEW cov: 12180 ft: 12184 corp: 2/11b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:12:16.945 [2024-10-14 17:29:13.941807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffa0ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:16.945 [2024-10-14 17:29:13.941850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:16.945 NEW_FUNC[1/1]: 0x1c026a8 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:595 00:12:16.945 #6 NEW cov: 12314 ft: 12874 corp: 3/21b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:12:16.945 [2024-10-14 17:29:14.032000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff6000 cdw11:0003ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:16.945 [2024-10-14 17:29:14.032043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.204 #7 NEW cov: 12320 ft: 13144 corp: 4/31b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:12:17.204 [2024-10-14 17:29:14.122161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2dffff60 cdw11:000003ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.204 [2024-10-14 17:29:14.122194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.204 #12 NEW cov: 12405 ft: 13432 corp: 5/40b lim: 40 exec/s: 0 rss: 74Mb L: 9/10 MS: 5 ShuffleBytes-ChangeByte-CopyPart-ChangeByte-CrossOver- 00:12:17.204 [2024-10-14 17:29:14.182318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:01000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.204 [2024-10-14 17:29:14.182350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.204 #13 NEW cov: 12405 ft: 13580 corp: 6/54b lim: 40 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 CMP- DE: "\001\000\000\004"- 00:12:17.204 [2024-10-14 17:29:14.242469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2dffff60 cdw11:01000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.204 [2024-10-14 17:29:14.242501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.463 #14 NEW cov: 12405 ft: 13626 corp: 7/63b lim: 40 exec/s: 0 rss: 74Mb L: 9/14 MS: 1 PersAutoDict- DE: "\001\000\000\004"- 00:12:17.463 [2024-10-14 17:29:14.332670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:01000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.463 [2024-10-14 17:29:14.332701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.463 #15 NEW cov: 12405 ft: 13702 corp: 8/77b lim: 40 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 ChangeByte- 00:12:17.463 [2024-10-14 17:29:14.422940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2dffff0a cdw11:60000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.463 [2024-10-14 17:29:14.422972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.463 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:17.463 #21 NEW cov: 12428 ft: 13747 corp: 9/87b lim: 40 exec/s: 0 rss: 74Mb L: 10/14 MS: 1 CrossOver- 00:12:17.463 [2024-10-14 17:29:14.483060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.463 [2024-10-14 17:29:14.483090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.463 #22 NEW cov: 12428 ft: 13766 corp: 10/101b lim: 40 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 PersAutoDict- DE: "\001\000\000\004"- 00:12:17.463 [2024-10-14 17:29:14.533248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.463 [2024-10-14 17:29:14.533279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.463 [2024-10-14 17:29:14.533328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.463 [2024-10-14 17:29:14.533348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:17.722 #23 NEW cov: 12428 ft: 14129 corp: 11/124b lim: 40 exec/s: 23 rss: 74Mb L: 23/23 MS: 1 CrossOver- 00:12:17.722 [2024-10-14 17:29:14.623474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffa0ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.722 [2024-10-14 17:29:14.623506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.722 #24 NEW cov: 12428 ft: 14141 corp: 12/134b lim: 40 exec/s: 24 rss: 74Mb L: 10/23 MS: 1 CopyPart- 00:12:17.722 [2024-10-14 17:29:14.683581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.722 [2024-10-14 17:29:14.683612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.722 #25 NEW cov: 12428 ft: 14161 corp: 13/144b lim: 40 exec/s: 25 rss: 74Mb L: 10/23 MS: 1 ShuffleBytes- 00:12:17.722 [2024-10-14 17:29:14.733712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffe4a0 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.722 [2024-10-14 17:29:14.733742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.722 #26 NEW cov: 12428 ft: 14195 corp: 14/155b lim: 40 exec/s: 26 rss: 74Mb L: 11/23 MS: 1 InsertByte- 00:12:17.722 [2024-10-14 17:29:14.783838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0104 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.722 [2024-10-14 17:29:14.783868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.981 #27 NEW cov: 12428 ft: 14229 corp: 15/169b lim: 40 exec/s: 27 rss: 74Mb L: 14/23 MS: 1 ShuffleBytes- 00:12:17.981 [2024-10-14 17:29:14.874096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2dff60ff cdw11:01000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.981 [2024-10-14 17:29:14.874127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.982 #28 NEW cov: 12428 ft: 14284 corp: 16/178b lim: 40 exec/s: 28 rss: 74Mb L: 9/23 MS: 1 ShuffleBytes- 00:12:17.982 [2024-10-14 17:29:14.964349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffa0ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.982 [2024-10-14 17:29:14.964380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:12:17.982 #29 NEW cov: 12428 ft: 14306 corp: 17/192b lim: 40 exec/s: 29 rss: 74Mb L: 14/23 MS: 1 PersAutoDict- DE: "\001\000\000\004"- 00:12:17.982 [2024-10-14 17:29:15.014482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2dff0000 cdw11:01ff6004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.982 [2024-10-14 17:29:15.014514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:17.982 #30 NEW cov: 12428 ft: 14331 corp: 18/201b lim: 40 exec/s: 30 rss: 74Mb L: 9/23 MS: 1 ShuffleBytes- 00:12:17.982 [2024-10-14 17:29:15.064627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:17.982 [2024-10-14 17:29:15.064660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.240 #31 NEW cov: 12428 ft: 14335 corp: 19/211b lim: 40 exec/s: 31 rss: 74Mb L: 10/23 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:12:18.240 [2024-10-14 17:29:15.154821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2d0160ff cdw11:00ff0004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.240 [2024-10-14 17:29:15.154855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.240 #32 NEW cov: 12428 ft: 14351 corp: 20/220b lim: 40 exec/s: 32 rss: 74Mb L: 9/23 MS: 1 ShuffleBytes- 00:12:18.240 [2024-10-14 17:29:15.204936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffa0ff cdw11:ffffff12 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.240 [2024-10-14 17:29:15.204966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.240 #33 NEW cov: 12428 ft: 14386 corp: 21/235b lim: 40 exec/s: 33 rss: 75Mb L: 15/23 MS: 1 InsertByte- 00:12:18.240 [2024-10-14 17:29:15.295370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.240 [2024-10-14 17:29:15.295401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.240 [2024-10-14 17:29:15.295452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.240 [2024-10-14 17:29:15.295468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:18.240 [2024-10-14 17:29:15.295500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.240 [2024-10-14 17:29:15.295516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:18.240 [2024-10-14 17:29:15.295547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00002dff cdw11:ff600000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.240 [2024-10-14 17:29:15.295563] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:18.499 #34 NEW cov: 12428 ft: 14952 corp: 22/270b lim: 40 exec/s: 34 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:12:18.499 [2024-10-14 17:29:15.355370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffa0ff cdw11:12ffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.499 [2024-10-14 17:29:15.355400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.499 #35 NEW cov: 12428 ft: 14964 corp: 23/282b lim: 40 exec/s: 35 rss: 75Mb L: 12/35 MS: 1 EraseBytes- 00:12:18.499 [2024-10-14 17:29:15.445592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2dff60ff cdw11:0100003d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.499 [2024-10-14 17:29:15.445622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.499 #36 NEW cov: 12428 ft: 15000 corp: 24/291b lim: 40 exec/s: 36 rss: 75Mb L: 9/35 MS: 1 ChangeByte- 00:12:18.499 [2024-10-14 17:29:15.535888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.499 [2024-10-14 17:29:15.535919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.499 [2024-10-14 17:29:15.535953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.499 [2024-10-14 17:29:15.535969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:18.758 [2024-10-14 17:29:15.626149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.758 [2024-10-14 17:29:15.626184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:18.758 [2024-10-14 17:29:15.626219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.758 [2024-10-14 17:29:15.626235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:18.759 #38 NEW cov: 12428 ft: 15044 corp: 25/309b lim: 40 exec/s: 19 rss: 75Mb L: 18/35 MS: 2 PersAutoDict-ShuffleBytes- DE: "\377\377\377\377\377\377\377\377"- 00:12:18.759 #38 DONE cov: 12428 ft: 15044 corp: 25/309b lim: 40 exec/s: 19 rss: 75Mb 00:12:18.759 ###### Recommended dictionary. ###### 00:12:18.759 "\001\000\000\004" # Uses: 3 00:12:18.759 "\377\377\377\377\377\377\377\377" # Uses: 1 00:12:18.759 ###### End of recommended dictionary. 
###### 00:12:18.759 Done 38 runs in 2 second(s) 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:18.759 17:29:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:12:18.759 [2024-10-14 17:29:15.817986] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
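Note on the harness trace above: each fuzzer run is set up by the same pattern visible in the nvmf/run.sh xtrace lines, where the two-digit fuzzer index selects the TCP service port (44NN), a per-run JSON config is derived from the shared fuzz_json.conf by rewriting the trsvcid, expected leaks are suppressed through LSAN, and llvm_nvme_fuzz is launched against the derived target ID for the configured time budget. The following is a minimal sketch of that pattern reconstructed from the trace, not the actual nvmf/run.sh source; the function name, the $rootdir variable, and the output redirections (which do not appear in xtrace output) are assumptions, and only flags visible in the trace are used.

    # Sketch of the per-fuzzer-run setup seen in the nvmf/run.sh trace (illustrative only).
    # Assumes $rootdir points at the SPDK checkout.
    start_llvm_fuzz_sketch() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz

        # Port is "44" followed by the zero-padded fuzzer index (14 -> 4414, 15 -> 4415).
        local port="44$(printf %02d "$fuzzer_type")"
        mkdir -p "$corpus_dir"

        # Derive the per-run config by swapping the listener port in the shared template
        # (the redirect into $nvmf_cfg is assumed; xtrace only shows the sed command).
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

        # Suppress leaks that are expected when the target is torn down mid-fuzz.
        echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
        echo leak:nvmf_ctrlr_create >> "$suppress_file"

        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
        LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
            "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
                -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
                -D "$corpus_dir" -Z "$fuzzer_type"

        # The real harness removes the per-run config and suppression file afterwards,
        # as the "rm -rf /tmp/fuzz_json_NN.conf /var/tmp/suppress_nvmf_fuzz" trace lines show.
        rm -rf "$nvmf_cfg" "$suppress_file"
    }

Each iteration of the outer loop in the trace (the common.sh@72 "(( i++ ))" / "(( i < fuzz_num ))" lines) invokes this setup with the next fuzzer index, so runs 13, 14, and 15 in this log differ only in the port, config path, corpus directory, and -Z value.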
00:12:18.759 [2024-10-14 17:29:15.818075] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108206 ] 00:12:19.017 [2024-10-14 17:29:16.019275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.017 [2024-10-14 17:29:16.057889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.411 [2024-10-14 17:29:16.116953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.411 [2024-10-14 17:29:16.133128] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:12:19.411 INFO: Running with entropic power schedule (0xFF, 100). 00:12:19.411 INFO: Seed: 2794341511 00:12:19.411 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:19.411 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:19.411 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:12:19.411 INFO: A corpus is not provided, starting from an empty corpus 00:12:19.411 #2 INITED exec/s: 0 rss: 66Mb 00:12:19.411 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:19.411 This may also happen if the target rejected all inputs we tried so far 00:12:19.411 [2024-10-14 17:29:16.199061] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.411 [2024-10-14 17:29:16.199100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.411 [2024-10-14 17:29:16.199177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.411 [2024-10-14 17:29:16.199195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.411 [2024-10-14 17:29:16.199254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.412 [2024-10-14 17:29:16.199270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:19.671 NEW_FUNC[1/716]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:12:19.671 NEW_FUNC[2/716]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:12:19.671 #8 NEW cov: 12205 ft: 12206 corp: 2/23b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:12:19.671 [2024-10-14 17:29:16.540396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.540460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.540550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 
17:29:16.540577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.540661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.540687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.540770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.540798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:19.671 #12 NEW cov: 12325 ft: 13119 corp: 3/57b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 4 InsertByte-ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:12:19.671 [2024-10-14 17:29:16.590183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.590210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.590286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.590300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.590360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.590377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.590437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.671 [2024-10-14 17:29:16.590450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:19.671 [2024-10-14 17:29:16.590507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.590521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:19.672 #13 NEW cov: 12331 ft: 13451 corp: 4/92b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:12:19.672 [2024-10-14 17:29:16.649845] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.649871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.672 [2024-10-14 17:29:16.649930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.649944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.672 #14 NEW cov: 12416 ft: 13931 corp: 
5/110b lim: 35 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 EraseBytes- 00:12:19.672 [2024-10-14 17:29:16.689943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.689968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.672 [2024-10-14 17:29:16.690031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.690061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.672 #20 NEW cov: 12416 ft: 13987 corp: 6/128b lim: 35 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 ChangeByte- 00:12:19.672 [2024-10-14 17:29:16.750142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.750167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.672 [2024-10-14 17:29:16.750242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.672 [2024-10-14 17:29:16.750256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.931 #21 NEW cov: 12416 ft: 14120 corp: 7/147b lim: 35 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 InsertByte- 00:12:19.931 [2024-10-14 17:29:16.810153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:4 cdw10:8000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.810180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.931 #23 NEW cov: 12416 ft: 14822 corp: 8/157b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 2 ChangeBit-InsertRepeatedBytes- 00:12:19.931 [2024-10-14 17:29:16.850422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.850448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.931 [2024-10-14 17:29:16.850507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.850524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.931 #24 NEW cov: 12416 ft: 14885 corp: 9/171b lim: 35 exec/s: 0 rss: 74Mb L: 14/35 MS: 1 EraseBytes- 00:12:19.931 [2024-10-14 17:29:16.890674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.890701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.931 [2024-10-14 17:29:16.890760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.890776] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.931 [2024-10-14 17:29:16.890836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.890851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:19.931 #25 NEW cov: 12416 ft: 14946 corp: 10/195b lim: 35 exec/s: 0 rss: 74Mb L: 24/35 MS: 1 CMP- DE: "\001\005"- 00:12:19.931 [2024-10-14 17:29:16.950665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.950691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:19.931 [2024-10-14 17:29:16.950750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:16.950765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.931 #26 NEW cov: 12416 ft: 15016 corp: 11/214b lim: 35 exec/s: 0 rss: 74Mb L: 19/35 MS: 1 ChangeBit- 00:12:19.931 [2024-10-14 17:29:17.011047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ERROR_RECOVERY cid:5 cdw10:80000005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:17.011075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:19.931 [2024-10-14 17:29:17.011161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:19.931 [2024-10-14 17:29:17.011178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.190 NEW_FUNC[1/2]: 0x46dbb8 in feat_error_recover /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:304 00:12:20.190 NEW_FUNC[2/2]: 0x134ea48 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1766 00:12:20.190 #27 NEW cov: 12462 ft: 15120 corp: 12/239b lim: 35 exec/s: 0 rss: 74Mb L: 25/35 MS: 1 CrossOver- 00:12:20.190 [2024-10-14 17:29:17.060866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.060892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.191 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:20.191 #28 NEW cov: 12485 ft: 15212 corp: 13/251b lim: 35 exec/s: 0 rss: 74Mb L: 12/35 MS: 1 EraseBytes- 00:12:20.191 [2024-10-14 17:29:17.121642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.121669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.121749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.121765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.121824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.121838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.121896] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.121910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.121971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.121986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:20.191 #29 NEW cov: 12485 ft: 15234 corp: 14/286b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:12:20.191 [2024-10-14 17:29:17.181793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.181820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.181898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.181913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.181970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.181983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.182041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.182055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.182125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.182139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:20.191 #30 NEW cov: 12485 ft: 15257 corp: 15/321b lim: 35 exec/s: 30 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:12:20.191 [2024-10-14 17:29:17.221893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.221919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.221996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.222011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.222066] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.222081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.222142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.222170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.191 [2024-10-14 17:29:17.222232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.222246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:20.191 #31 NEW cov: 12485 ft: 15288 corp: 16/356b lim: 35 exec/s: 31 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:12:20.191 [2024-10-14 17:29:17.261393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:4 cdw10:8000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.191 [2024-10-14 17:29:17.261420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.450 #32 NEW cov: 12485 ft: 15326 corp: 17/364b lim: 35 exec/s: 32 rss: 75Mb L: 8/35 MS: 1 EraseBytes- 00:12:20.450 [2024-10-14 17:29:17.321728] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.450 [2024-10-14 17:29:17.321753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.450 [2024-10-14 17:29:17.321825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.450 [2024-10-14 17:29:17.321840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.450 #33 NEW cov: 12485 ft: 15370 corp: 18/379b lim: 35 exec/s: 33 rss: 75Mb L: 15/35 MS: 1 EraseBytes- 00:12:20.450 [2024-10-14 17:29:17.361851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.450 [2024-10-14 17:29:17.361876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.450 [2024-10-14 17:29:17.361937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.450 [2024-10-14 17:29:17.361951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.450 #34 NEW cov: 12485 ft: 15422 corp: 19/397b lim: 35 exec/s: 
34 rss: 75Mb L: 18/35 MS: 1 PersAutoDict- DE: "\001\005"- 00:12:20.450 [2024-10-14 17:29:17.402121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.450 [2024-10-14 17:29:17.402148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.450 [2024-10-14 17:29:17.402204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.402220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.402276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.402292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.451 #35 NEW cov: 12485 ft: 15521 corp: 20/419b lim: 35 exec/s: 35 rss: 75Mb L: 22/35 MS: 1 ChangeBit- 00:12:20.451 [2024-10-14 17:29:17.442070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.442095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.442168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.442186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.451 #36 NEW cov: 12485 ft: 15539 corp: 21/438b lim: 35 exec/s: 36 rss: 75Mb L: 19/35 MS: 1 ShuffleBytes- 00:12:20.451 [2024-10-14 17:29:17.482521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.482546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.482621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.482636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.482691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.482704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.482761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.482775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.451 #37 NEW cov: 12485 ft: 15565 corp: 22/472b lim: 35 exec/s: 37 rss: 75Mb L: 34/35 MS: 1 CopyPart- 00:12:20.451 
[2024-10-14 17:29:17.522511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.522537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.522611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.522626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.451 [2024-10-14 17:29:17.522682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.451 [2024-10-14 17:29:17.522695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.710 #38 NEW cov: 12485 ft: 15610 corp: 23/498b lim: 35 exec/s: 38 rss: 75Mb L: 26/35 MS: 1 CrossOver- 00:12:20.710 [2024-10-14 17:29:17.562731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.562756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.710 [2024-10-14 17:29:17.562815] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.562829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.710 [2024-10-14 17:29:17.562888] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.562901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.710 [2024-10-14 17:29:17.562957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.562971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.710 #39 NEW cov: 12485 ft: 15616 corp: 24/532b lim: 35 exec/s: 39 rss: 75Mb L: 34/35 MS: 1 CopyPart- 00:12:20.710 [2024-10-14 17:29:17.602546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.602573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.710 [2024-10-14 17:29:17.602631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.602645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.710 #40 NEW cov: 12485 ft: 15637 corp: 25/550b lim: 35 exec/s: 40 rss: 75Mb L: 18/35 MS: 1 CrossOver- 00:12:20.710 [2024-10-14 17:29:17.662607] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES TIMESTAMP cid:4 cdw10:8000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.710 [2024-10-14 17:29:17.662634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.711 #41 NEW cov: 12485 ft: 15656 corp: 26/560b lim: 35 exec/s: 41 rss: 75Mb L: 10/35 MS: 1 PersAutoDict- DE: "\001\005"- 00:12:20.711 [2024-10-14 17:29:17.702812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.702838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.711 [2024-10-14 17:29:17.702910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.702925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.711 #42 NEW cov: 12485 ft: 15691 corp: 27/578b lim: 35 exec/s: 42 rss: 75Mb L: 18/35 MS: 1 ShuffleBytes- 00:12:20.711 [2024-10-14 17:29:17.743434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.743459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.711 [2024-10-14 17:29:17.743532] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.743548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.711 [2024-10-14 17:29:17.743603] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.743618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.711 [2024-10-14 17:29:17.743673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.743687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.711 [2024-10-14 17:29:17.743743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.711 [2024-10-14 17:29:17.743757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:12:20.711 #43 NEW cov: 12485 ft: 15713 corp: 28/613b lim: 35 exec/s: 43 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:12:20.970 [2024-10-14 17:29:17.803159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.803186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:17.803246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.803263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.970 #44 NEW cov: 12485 ft: 15732 corp: 29/632b lim: 35 exec/s: 44 rss: 75Mb L: 19/35 MS: 1 InsertByte- 00:12:20.970 [2024-10-14 17:29:17.843248] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.843273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:17.843344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.843359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.970 #45 NEW cov: 12485 ft: 15766 corp: 30/652b lim: 35 exec/s: 45 rss: 75Mb L: 20/35 MS: 1 InsertByte- 00:12:20.970 [2024-10-14 17:29:17.903637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ERROR_RECOVERY cid:5 cdw10:80000005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.903664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:17.903722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.903738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.970 #46 NEW cov: 12485 ft: 15783 corp: 31/678b lim: 35 exec/s: 46 rss: 75Mb L: 26/35 MS: 1 InsertByte- 00:12:20.970 [2024-10-14 17:29:17.963593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.963617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:17.963694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:17.963709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.970 #47 NEW cov: 12485 ft: 15793 corp: 32/696b lim: 35 exec/s: 47 rss: 75Mb L: 18/35 MS: 1 ChangeByte- 00:12:20.970 [2024-10-14 17:29:18.024089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:18.024115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:18.024173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:18.024186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:18.024243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 
cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:18.024257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:20.970 [2024-10-14 17:29:18.024312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:20.970 [2024-10-14 17:29:18.024326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:20.970 #48 NEW cov: 12485 ft: 15800 corp: 33/730b lim: 35 exec/s: 48 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:12:21.230 [2024-10-14 17:29:18.064050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.064079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:21.230 [2024-10-14 17:29:18.064138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.064152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:21.230 [2024-10-14 17:29:18.064211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.064225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:21.230 #49 NEW cov: 12485 ft: 15893 corp: 34/751b lim: 35 exec/s: 49 rss: 75Mb L: 21/35 MS: 1 CMP- DE: "\016\000"- 00:12:21.230 [2024-10-14 17:29:18.104336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.104361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:21.230 [2024-10-14 17:29:18.104437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.104452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:21.230 [2024-10-14 17:29:18.104521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.104535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:21.230 [2024-10-14 17:29:18.104592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.104606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:21.230 #50 NEW cov: 12485 ft: 15913 corp: 35/785b lim: 35 exec/s: 50 rss: 75Mb L: 34/35 MS: 1 CopyPart- 00:12:21.230 [2024-10-14 17:29:18.144067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.144092] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:21.230 [2024-10-14 17:29:18.144152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:21.230 [2024-10-14 17:29:18.144166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:21.230 #51 NEW cov: 12485 ft: 15957 corp: 36/803b lim: 35 exec/s: 25 rss: 75Mb L: 18/35 MS: 1 ShuffleBytes- 00:12:21.230 #51 DONE cov: 12485 ft: 15957 corp: 36/803b lim: 35 exec/s: 25 rss: 75Mb 00:12:21.230 ###### Recommended dictionary. ###### 00:12:21.230 "\001\005" # Uses: 2 00:12:21.230 "\016\000" # Uses: 0 00:12:21.230 ###### End of recommended dictionary. ###### 00:12:21.230 Done 51 runs in 2 second(s) 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:12:21.230 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:12:21.231 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:21.231 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:21.231 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:21.231 17:29:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:12:21.490 [2024-10-14 17:29:18.337035] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 
initialization... 00:12:21.490 [2024-10-14 17:29:18.337107] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108514 ] 00:12:21.490 [2024-10-14 17:29:18.533287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.490 [2024-10-14 17:29:18.572433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.749 [2024-10-14 17:29:18.631694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.749 [2024-10-14 17:29:18.647850] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:12:21.749 INFO: Running with entropic power schedule (0xFF, 100). 00:12:21.749 INFO: Seed: 1013380375 00:12:21.749 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:21.749 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:21.749 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:12:21.749 INFO: A corpus is not provided, starting from an empty corpus 00:12:21.749 #2 INITED exec/s: 0 rss: 66Mb 00:12:21.749 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:21.749 This may also happen if the target rejected all inputs we tried so far 00:12:21.749 [2024-10-14 17:29:18.715033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:21.749 [2024-10-14 17:29:18.715081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.008 NEW_FUNC[1/714]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:12:22.008 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:22.008 #5 NEW cov: 12182 ft: 12181 corp: 2/13b lim: 35 exec/s: 0 rss: 74Mb L: 12/12 MS: 3 ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:12:22.008 [2024-10-14 17:29:19.056826] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000040e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.008 [2024-10-14 17:29:19.056870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.008 [2024-10-14 17:29:19.056970] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.009 [2024-10-14 17:29:19.056992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:22.009 [2024-10-14 17:29:19.057112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.009 [2024-10-14 17:29:19.057130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:22.009 #7 NEW cov: 12296 ft: 13151 corp: 3/34b lim: 35 exec/s: 0 rss: 74Mb L: 21/21 MS: 2 ChangeBit-InsertRepeatedBytes- 00:12:22.267 [2024-10-14 17:29:19.116577] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.267 [2024-10-14 17:29:19.116608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.267 #8 NEW cov: 12302 ft: 13388 corp: 4/47b lim: 35 exec/s: 0 rss: 74Mb L: 13/21 MS: 1 InsertByte- 00:12:22.267 [2024-10-14 17:29:19.187636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.267 [2024-10-14 17:29:19.187666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.268 [2024-10-14 17:29:19.187780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.268 [2024-10-14 17:29:19.187799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:22.268 [2024-10-14 17:29:19.187894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.268 [2024-10-14 17:29:19.187912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:22.268 #9 NEW cov: 12387 ft: 13691 corp: 5/68b lim: 35 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:12:22.268 [2024-10-14 17:29:19.257506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.268 [2024-10-14 17:29:19.257535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.268 #10 NEW cov: 12387 ft: 13738 corp: 6/80b lim: 35 exec/s: 0 rss: 74Mb L: 12/21 MS: 1 ChangeByte- 00:12:22.268 [2024-10-14 17:29:19.308259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.268 [2024-10-14 17:29:19.308287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.268 [2024-10-14 17:29:19.308390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.268 [2024-10-14 17:29:19.308406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:22.268 #11 NEW cov: 12387 ft: 13973 corp: 7/95b lim: 35 exec/s: 0 rss: 74Mb L: 15/21 MS: 1 CopyPart- 00:12:22.526 [2024-10-14 17:29:19.378253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.526 [2024-10-14 17:29:19.378280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.526 #12 NEW cov: 12387 ft: 14035 corp: 8/107b lim: 35 exec/s: 0 rss: 74Mb L: 12/21 MS: 1 ChangeBinInt- 00:12:22.526 [2024-10-14 17:29:19.428872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.526 [2024-10-14 17:29:19.428899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.526 [2024-10-14 17:29:19.428995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.526 [2024-10-14 17:29:19.429015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:22.526 #13 NEW cov: 12387 ft: 14053 corp: 9/122b lim: 35 exec/s: 0 rss: 74Mb L: 15/21 MS: 1 ChangeBit- 00:12:22.526 [2024-10-14 17:29:19.498943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.526 [2024-10-14 17:29:19.498971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.526 #14 NEW cov: 12387 ft: 14099 corp: 10/134b lim: 35 exec/s: 0 rss: 74Mb L: 12/21 MS: 1 ChangeByte- 00:12:22.526 [2024-10-14 17:29:19.549299] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.526 [2024-10-14 17:29:19.549327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.526 #15 NEW cov: 12387 ft: 14146 corp: 11/146b lim: 35 exec/s: 0 rss: 74Mb L: 12/21 MS: 1 ChangeByte- 00:12:22.526 [2024-10-14 17:29:19.599483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.526 [2024-10-14 17:29:19.599510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.785 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:22.785 #16 NEW cov: 12410 ft: 14247 corp: 12/158b lim: 35 exec/s: 0 rss: 74Mb L: 12/21 MS: 1 ChangeASCIIInt- 00:12:22.785 [2024-10-14 17:29:19.669876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.785 [2024-10-14 17:29:19.669903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.785 #17 NEW cov: 12410 ft: 14268 corp: 13/170b lim: 35 exec/s: 17 rss: 74Mb L: 12/21 MS: 1 ChangeBit- 00:12:22.785 [2024-10-14 17:29:19.740169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.785 [2024-10-14 17:29:19.740197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:22.785 #18 NEW cov: 12410 ft: 14277 corp: 14/182b lim: 35 exec/s: 18 rss: 74Mb L: 12/21 MS: 1 ChangeBit- 00:12:22.785 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:12:22.785 #19 NEW cov: 12424 ft: 14372 corp: 15/195b lim: 35 exec/s: 19 rss: 74Mb L: 13/21 MS: 1 CrossOver- 00:12:22.785 [2024-10-14 17:29:19.841428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.785 [2024-10-14 17:29:19.841459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:12:22.785 [2024-10-14 17:29:19.841561] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.785 [2024-10-14 17:29:19.841579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:23.044 #20 NEW cov: 12424 ft: 14402 corp: 16/210b lim: 35 exec/s: 20 rss: 74Mb L: 15/21 MS: 1 ShuffleBytes- 00:12:23.044 [2024-10-14 17:29:19.911291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.044 [2024-10-14 17:29:19.911322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.044 #21 NEW cov: 12424 ft: 14418 corp: 17/223b lim: 35 exec/s: 21 rss: 74Mb L: 13/21 MS: 1 InsertByte- 00:12:23.044 [2024-10-14 17:29:19.961792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.044 [2024-10-14 17:29:19.961826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.044 [2024-10-14 17:29:19.961925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.044 [2024-10-14 17:29:19.961941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:23.044 #22 NEW cov: 12424 ft: 14430 corp: 18/241b lim: 35 exec/s: 22 rss: 75Mb L: 18/21 MS: 1 InsertRepeatedBytes- 00:12:23.044 [2024-10-14 17:29:20.031758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.044 [2024-10-14 17:29:20.031791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.044 #23 NEW cov: 12424 ft: 14526 corp: 19/253b lim: 35 exec/s: 23 rss: 75Mb L: 12/21 MS: 1 ChangeByte- 00:12:23.044 [2024-10-14 17:29:20.081987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.044 [2024-10-14 17:29:20.082018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.044 #24 NEW cov: 12424 ft: 14547 corp: 20/265b lim: 35 exec/s: 24 rss: 75Mb L: 12/21 MS: 1 ShuffleBytes- 00:12:23.044 [2024-10-14 17:29:20.132237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.044 [2024-10-14 17:29:20.132267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.303 #25 NEW cov: 12424 ft: 14593 corp: 21/277b lim: 35 exec/s: 25 rss: 75Mb L: 12/21 MS: 1 ChangeBit- 00:12:23.303 [2024-10-14 17:29:20.202691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.303 [2024-10-14 17:29:20.202721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.303 #26 NEW cov: 12424 ft: 14607 corp: 22/288b lim: 35 exec/s: 26 rss: 75Mb 
L: 11/21 MS: 1 EraseBytes- 00:12:23.303 [2024-10-14 17:29:20.252881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.303 [2024-10-14 17:29:20.252912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.303 #27 NEW cov: 12424 ft: 14615 corp: 23/300b lim: 35 exec/s: 27 rss: 75Mb L: 12/21 MS: 1 ChangeBit- 00:12:23.303 [2024-10-14 17:29:20.303339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.303 [2024-10-14 17:29:20.303369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.303 #28 NEW cov: 12424 ft: 14622 corp: 24/311b lim: 35 exec/s: 28 rss: 75Mb L: 11/21 MS: 1 CopyPart- 00:12:23.303 [2024-10-14 17:29:20.373999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.303 [2024-10-14 17:29:20.374029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.303 [2024-10-14 17:29:20.374129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.303 [2024-10-14 17:29:20.374146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:23.562 #29 NEW cov: 12424 ft: 14643 corp: 25/329b lim: 35 exec/s: 29 rss: 75Mb L: 18/21 MS: 1 ChangeBit- 00:12:23.562 [2024-10-14 17:29:20.445281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.562 [2024-10-14 17:29:20.445312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:12:23.562 [2024-10-14 17:29:20.445417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000005b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.562 [2024-10-14 17:29:20.445434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:12:23.562 [2024-10-14 17:29:20.445540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.563 [2024-10-14 17:29:20.445554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:12:23.563 #30 NEW cov: 12424 ft: 14986 corp: 26/357b lim: 35 exec/s: 30 rss: 75Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:12:23.563 [2024-10-14 17:29:20.494319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.563 [2024-10-14 17:29:20.494346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.563 #31 NEW cov: 12424 ft: 15029 corp: 27/369b lim: 35 exec/s: 31 rss: 75Mb L: 12/28 MS: 1 ChangeByte- 00:12:23.563 [2024-10-14 17:29:20.564792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:23.563 [2024-10-14 17:29:20.564819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.563 #32 NEW cov: 12424 ft: 15067 corp: 28/382b lim: 35 exec/s: 32 rss: 75Mb L: 13/28 MS: 1 InsertByte- 00:12:23.563 [2024-10-14 17:29:20.635273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.563 [2024-10-14 17:29:20.635300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.822 #33 NEW cov: 12424 ft: 15115 corp: 29/394b lim: 35 exec/s: 33 rss: 75Mb L: 12/28 MS: 1 CrossOver- 00:12:23.822 [2024-10-14 17:29:20.685775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:23.822 [2024-10-14 17:29:20.685801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:12:23.822 #34 NEW cov: 12424 ft: 15124 corp: 30/406b lim: 35 exec/s: 17 rss: 75Mb L: 12/28 MS: 1 ChangeBit- 00:12:23.822 #34 DONE cov: 12424 ft: 15124 corp: 30/406b lim: 35 exec/s: 17 rss: 75Mb 00:12:23.822 Done 34 runs in 2 second(s) 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:23.822 17:29:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:12:23.822 [2024-10-14 17:29:20.860248] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:23.823 [2024-10-14 17:29:20.860315] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108867 ] 00:12:24.082 [2024-10-14 17:29:21.052164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.082 [2024-10-14 17:29:21.091548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.082 [2024-10-14 17:29:21.150783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.082 [2024-10-14 17:29:21.166932] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:12:24.341 INFO: Running with entropic power schedule (0xFF, 100). 00:12:24.341 INFO: Seed: 3530383504 00:12:24.341 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:24.341 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:24.341 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:12:24.341 INFO: A corpus is not provided, starting from an empty corpus 00:12:24.341 #2 INITED exec/s: 0 rss: 66Mb 00:12:24.341 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:24.341 This may also happen if the target rejected all inputs we tried so far 00:12:24.341 [2024-10-14 17:29:21.237867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.341 [2024-10-14 17:29:21.237911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.341 [2024-10-14 17:29:21.237979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.341 [2024-10-14 17:29:21.238000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.341 [2024-10-14 17:29:21.238092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.341 [2024-10-14 17:29:21.238110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.341 [2024-10-14 17:29:21.238200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.341 [2024-10-14 17:29:21.238221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.341 [2024-10-14 17:29:21.238313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.341 [2024-10-14 17:29:21.238333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:24.601 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:12:24.601 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:24.601 #10 NEW cov: 12287 ft: 12286 corp: 2/106b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 3 InsertRepeatedBytes-ChangeByte-InsertRepeatedBytes- 00:12:24.601 [2024-10-14 17:29:21.588812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.588862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.588947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.588963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.589047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.589077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:12:24.601 [2024-10-14 17:29:21.589166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.589185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.589286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.589305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:24.601 #16 NEW cov: 12400 ft: 12758 corp: 3/211b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 ShuffleBytes- 00:12:24.601 [2024-10-14 17:29:21.659269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946727 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.659299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.659395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.659413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.659493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.659509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.659598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.659616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.601 [2024-10-14 17:29:21.659701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.601 [2024-10-14 17:29:21.659719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:24.601 #17 NEW cov: 12406 ft: 13069 corp: 4/316b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 CopyPart- 00:12:24.861 [2024-10-14 17:29:21.708836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.708865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.708938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.708958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.709043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.709059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.861 #18 NEW cov: 12491 ft: 13854 corp: 5/389b lim: 105 exec/s: 0 rss: 74Mb L: 73/105 MS: 1 CrossOver- 00:12:24.861 [2024-10-14 17:29:21.759202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1736392635849121304 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.759232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.759305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.759324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.759381] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.759398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.861 #19 NEW cov: 12491 ft: 13936 corp: 6/462b lim: 105 exec/s: 0 rss: 74Mb L: 73/105 MS: 1 ChangeBinInt- 00:12:24.861 [2024-10-14 17:29:21.829371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.829400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.829505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.829521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.861 #23 NEW cov: 12491 ft: 14337 corp: 7/507b lim: 105 exec/s: 0 rss: 74Mb L: 45/105 MS: 4 CopyPart-InsertByte-CrossOver-CrossOver- 00:12:24.861 [2024-10-14 17:29:21.880416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.880445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.880550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.880567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.880645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 
lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.880669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.880755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:11791448172606497699 len:41892 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.880771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.880857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:11791523231454962595 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.880873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:24.861 #24 NEW cov: 12491 ft: 14413 corp: 8/612b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:12:24.861 [2024-10-14 17:29:21.930643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946727 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.930673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.930770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.930787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.930872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.930891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.930991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7271035106627151847 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.931007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:24.861 [2024-10-14 17:29:21.931096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:24.861 [2024-10-14 17:29:21.931112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.121 #25 NEW cov: 12491 ft: 14472 corp: 9/717b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 CopyPart- 00:12:25.121 [2024-10-14 17:29:22.000872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946727 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.000902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.000994] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.001011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.001104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.001122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.001218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7271035106627151847 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.001238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.001337] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.001355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.121 #26 NEW cov: 12491 ft: 14548 corp: 10/822b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 ChangeByte- 00:12:25.121 [2024-10-14 17:29:22.071267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946726 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.071297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.071377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.071395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.071457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.071473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.071563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7271035106627151847 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.071581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.071677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.071693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.121 NEW_FUNC[1/1]: 0x1c09658 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:25.121 #27 NEW cov: 12514 ft: 14613 corp: 11/927b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 ChangeBit- 00:12:25.121 [2024-10-14 17:29:22.141788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946727 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.141815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.141897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.141915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.142000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.142017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.142121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7271061597985433575 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.142156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.142250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.142272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.121 #28 NEW cov: 12514 ft: 14694 corp: 12/1032b lim: 105 exec/s: 0 rss: 74Mb L: 105/105 MS: 1 CMP- DE: "\377\377\377\036"- 00:12:25.121 [2024-10-14 17:29:22.191360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.191388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.121 [2024-10-14 17:29:22.191483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.121 [2024-10-14 17:29:22.191504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.381 #29 NEW cov: 12514 ft: 14746 corp: 13/1077b lim: 105 exec/s: 29 rss: 75Mb L: 45/105 MS: 1 ChangeBit- 00:12:25.381 [2024-10-14 17:29:22.261942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.261975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.381 [2024-10-14 17:29:22.262061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.262078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.381 [2024-10-14 17:29:22.262154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.262172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.381 #30 NEW cov: 12514 ft: 14770 corp: 14/1150b lim: 105 exec/s: 30 rss: 75Mb L: 73/105 MS: 1 CopyPart- 00:12:25.381 [2024-10-14 17:29:22.311743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.311774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.381 [2024-10-14 17:29:22.311849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59376 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.311866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.381 #31 NEW cov: 12514 ft: 14796 corp: 15/1193b lim: 105 exec/s: 31 rss: 75Mb L: 43/105 MS: 1 EraseBytes- 00:12:25.381 [2024-10-14 17:29:22.382123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.382154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.381 [2024-10-14 17:29:22.382247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579908415842279 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.382263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.381 #32 NEW cov: 12514 ft: 14824 corp: 16/1238b lim: 105 exec/s: 32 rss: 75Mb L: 45/105 MS: 1 ChangeByte- 00:12:25.381 [2024-10-14 17:29:22.432313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.432343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.381 [2024-10-14 17:29:22.432433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.381 [2024-10-14 17:29:22.432451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.381 #33 NEW cov: 12514 ft: 14870 corp: 17/1283b lim: 105 exec/s: 33 rss: 75Mb L: 45/105 MS: 1 ChangeBinInt- 00:12:25.640 [2024-10-14 17:29:22.483304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946727 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.640 
[2024-10-14 17:29:22.483336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.640 [2024-10-14 17:29:22.483417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.640 [2024-10-14 17:29:22.483436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.640 [2024-10-14 17:29:22.483520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.640 [2024-10-14 17:29:22.483539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.640 [2024-10-14 17:29:22.483630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7271035106627151847 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.640 [2024-10-14 17:29:22.483650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.640 [2024-10-14 17:29:22.483745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.640 [2024-10-14 17:29:22.483763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.640 #34 NEW cov: 12514 ft: 14892 corp: 18/1388b lim: 105 exec/s: 34 rss: 75Mb L: 105/105 MS: 1 ShuffleBytes- 00:12:25.640 [2024-10-14 17:29:22.533417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946726 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.533450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.533538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.533557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.533643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.533661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.533749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.533770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.533864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.533883] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.641 #35 NEW cov: 12514 ft: 14925 corp: 19/1493b lim: 105 exec/s: 35 rss: 75Mb L: 105/105 MS: 1 CrossOver- 00:12:25.641 [2024-10-14 17:29:22.603667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921887946727 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.603699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.603780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.603798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.603871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.603888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.603982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7271035106627151847 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.603999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.604097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.604119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:25.641 #36 NEW cov: 12514 ft: 14950 corp: 20/1598b lim: 105 exec/s: 36 rss: 75Mb L: 105/105 MS: 1 CrossOver- 00:12:25.641 [2024-10-14 17:29:22.673081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.673127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.673247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.673268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.641 #37 NEW cov: 12514 ft: 14983 corp: 21/1653b lim: 105 exec/s: 37 rss: 75Mb L: 55/105 MS: 1 EraseBytes- 00:12:25.641 [2024-10-14 17:29:22.723297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13165911455405881014 len:46775 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.723328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.641 [2024-10-14 17:29:22.723399] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13165911456529954486 len:46775 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.641 [2024-10-14 17:29:22.723415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.900 #41 NEW cov: 12514 ft: 15000 corp: 22/1709b lim: 105 exec/s: 41 rss: 75Mb L: 56/105 MS: 4 ShuffleBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:12:25.900 [2024-10-14 17:29:22.773727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.773758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.773841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.773863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.773912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710336370885257191 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.773928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.900 #42 NEW cov: 12514 ft: 15010 corp: 23/1792b lim: 105 exec/s: 42 rss: 75Mb L: 83/105 MS: 1 CopyPart- 00:12:25.900 [2024-10-14 17:29:22.843668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.843701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.843790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.843804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.900 #43 NEW cov: 12514 ft: 15022 corp: 24/1837b lim: 105 exec/s: 43 rss: 75Mb L: 45/105 MS: 1 ChangeBit- 00:12:25.900 [2024-10-14 17:29:22.894071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.894099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.894180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.894197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.894282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711265 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.894303] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.900 #44 NEW cov: 12514 ft: 15075 corp: 25/1910b lim: 105 exec/s: 44 rss: 75Mb L: 73/105 MS: 1 ChangeByte- 00:12:25.900 [2024-10-14 17:29:22.944613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.944643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.944738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59147 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.944761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.944844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.944863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:25.900 [2024-10-14 17:29:22.944959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:25.900 [2024-10-14 17:29:22.944974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:25.900 #45 NEW cov: 12514 ft: 15116 corp: 26/2004b lim: 105 exec/s: 45 rss: 75Mb L: 94/105 MS: 1 CrossOver- 00:12:26.159 [2024-10-14 17:29:23.014261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59112 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.014292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.159 [2024-10-14 17:29:23.014352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579908415842279 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.014371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.159 #46 NEW cov: 12514 ft: 15127 corp: 27/2049b lim: 105 exec/s: 46 rss: 75Mb L: 45/105 MS: 1 ChangeBit- 00:12:26.159 [2024-10-14 17:29:23.084772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.084800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.159 [2024-10-14 17:29:23.084895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.084910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.159 #47 NEW cov: 12514 ft: 15140 corp: 28/2091b lim: 105 exec/s: 47 rss: 75Mb L: 
42/105 MS: 1 EraseBytes- 00:12:26.159 [2024-10-14 17:29:23.134969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921873429735 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.134997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.159 [2024-10-14 17:29:23.135071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.135091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.159 #48 NEW cov: 12514 ft: 15153 corp: 29/2136b lim: 105 exec/s: 48 rss: 75Mb L: 45/105 MS: 1 ShuffleBytes- 00:12:26.159 [2024-10-14 17:29:23.185770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579921879361511 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.185798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.159 [2024-10-14 17:29:23.185881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.185899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.159 [2024-10-14 17:29:23.185980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710578976407939047 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.185996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.159 [2024-10-14 17:29:23.186093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.159 [2024-10-14 17:29:23.186115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:26.159 #49 NEW cov: 12514 ft: 15157 corp: 30/2220b lim: 105 exec/s: 24 rss: 75Mb L: 84/105 MS: 1 InsertByte- 00:12:26.159 #49 DONE cov: 12514 ft: 15157 corp: 30/2220b lim: 105 exec/s: 24 rss: 75Mb 00:12:26.159 ###### Recommended dictionary. ###### 00:12:26.159 "\377\377\377\036" # Uses: 0 00:12:26.159 ###### End of recommended dictionary. 
###### 00:12:26.159 Done 49 runs in 2 second(s) 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:26.418 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:26.419 17:29:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:12:26.419 [2024-10-14 17:29:23.382199] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:26.419 [2024-10-14 17:29:23.382265] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109226 ] 00:12:26.677 [2024-10-14 17:29:23.651036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.677 [2024-10-14 17:29:23.700254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.677 [2024-10-14 17:29:23.759368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.937 [2024-10-14 17:29:23.775533] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:12:26.937 INFO: Running with entropic power schedule (0xFF, 100). 00:12:26.937 INFO: Seed: 1846386899 00:12:26.937 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:26.937 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:26.937 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:12:26.937 INFO: A corpus is not provided, starting from an empty corpus 00:12:26.937 #2 INITED exec/s: 0 rss: 66Mb 00:12:26.937 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:26.937 This may also happen if the target rejected all inputs we tried so far 00:12:26.937 [2024-10-14 17:29:23.842910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.937 [2024-10-14 17:29:23.842956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:26.937 [2024-10-14 17:29:23.843081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.937 [2024-10-14 17:29:23.843108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:26.937 [2024-10-14 17:29:23.843218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.937 [2024-10-14 17:29:23.843247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:26.937 [2024-10-14 17:29:23.843356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:26.937 [2024-10-14 17:29:23.843378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.196 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:12:27.196 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:27.196 #17 NEW cov: 12308 ft: 12309 corp: 2/112b lim: 120 exec/s: 0 rss: 73Mb L: 111/111 MS: 5 ChangeByte-ChangeByte-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:12:27.196 [2024-10-14 17:29:24.203410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.196 [2024-10-14 17:29:24.203455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.196 #19 NEW cov: 12421 ft: 13826 corp: 3/140b lim: 120 exec/s: 0 rss: 74Mb L: 28/111 MS: 2 ChangeByte-InsertRepeatedBytes- 00:12:27.196 [2024-10-14 17:29:24.263769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.196 [2024-10-14 17:29:24.263798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.459 #20 NEW cov: 12427 ft: 14131 corp: 4/168b lim: 120 exec/s: 0 rss: 74Mb L: 28/111 MS: 1 ChangeBit- 00:12:27.459 [2024-10-14 17:29:24.335292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.335322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.459 [2024-10-14 17:29:24.335396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12367626344947604829 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.335417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.459 [2024-10-14 17:29:24.335485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.335502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.459 [2024-10-14 17:29:24.335589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.335607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:27.459 #21 NEW cov: 12512 ft: 14478 corp: 5/279b lim: 120 exec/s: 0 rss: 74Mb L: 111/111 MS: 1 ChangeBinInt- 00:12:27.459 [2024-10-14 17:29:24.404646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.404680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.459 #22 NEW cov: 12512 ft: 14537 corp: 6/307b lim: 120 exec/s: 0 rss: 74Mb L: 28/111 MS: 1 CopyPart- 00:12:27.459 [2024-10-14 17:29:24.475806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070172049407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.475835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.459 [2024-10-14 17:29:24.475908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:12:27.459 [2024-10-14 17:29:24.475927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.459 [2024-10-14 17:29:24.475998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.476013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.459 #31 NEW cov: 12512 ft: 14901 corp: 7/379b lim: 120 exec/s: 0 rss: 74Mb L: 72/111 MS: 4 InsertByte-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:12:27.459 [2024-10-14 17:29:24.525377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442765281955856 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.459 [2024-10-14 17:29:24.525408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.459 #33 NEW cov: 12512 ft: 14972 corp: 8/408b lim: 120 exec/s: 0 rss: 74Mb L: 29/111 MS: 2 ChangeBit-CrossOver- 00:12:27.721 [2024-10-14 17:29:24.576148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636072548621661 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.576180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.721 #34 NEW cov: 12521 ft: 15025 corp: 9/452b lim: 120 exec/s: 0 rss: 74Mb L: 44/111 MS: 1 CrossOver- 00:12:27.721 [2024-10-14 17:29:24.626494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070172049407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.626522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.721 [2024-10-14 17:29:24.626589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.626607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.721 [2024-10-14 17:29:24.626683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.626701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.721 #35 NEW cov: 12521 ft: 15105 corp: 10/524b lim: 120 exec/s: 0 rss: 74Mb L: 72/111 MS: 1 ShuffleBytes- 00:12:27.721 [2024-10-14 17:29:24.696844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070172049407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.696873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.721 [2024-10-14 17:29:24.696930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.696951] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.721 [2024-10-14 17:29:24.697009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.697030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.721 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:27.721 #36 NEW cov: 12544 ft: 15231 corp: 11/597b lim: 120 exec/s: 0 rss: 74Mb L: 73/111 MS: 1 InsertByte- 00:12:27.721 [2024-10-14 17:29:24.746589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11628229972274972418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.746616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.721 #41 NEW cov: 12544 ft: 15285 corp: 12/622b lim: 120 exec/s: 0 rss: 75Mb L: 25/111 MS: 5 ChangeByte-CMP-CMP-ChangeBit-CMP- DE: "\000\000\000\000\000\000\000\000"-"\017\000\000\000\000\000\000\000"-"\002\241_\305\215)+\000"- 00:12:27.721 [2024-10-14 17:29:24.796883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636072548621661 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.721 [2024-10-14 17:29:24.796913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.981 #42 NEW cov: 12544 ft: 15324 corp: 13/667b lim: 120 exec/s: 42 rss: 75Mb L: 45/111 MS: 1 InsertByte- 00:12:27.981 [2024-10-14 17:29:24.867979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070172049407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-10-14 17:29:24.868009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.981 [2024-10-14 17:29:24.868075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-10-14 17:29:24.868096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:27.981 [2024-10-14 17:29:24.868164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-10-14 17:29:24.868181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:27.981 #43 NEW cov: 12544 ft: 15355 corp: 14/739b lim: 120 exec/s: 43 rss: 75Mb L: 72/111 MS: 1 ShuffleBytes- 00:12:27.981 [2024-10-14 17:29:24.937667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-10-14 17:29:24.937695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.981 #44 NEW cov: 12544 ft: 15368 corp: 15/771b lim: 120 exec/s: 44 rss: 75Mb L: 32/111 MS: 1 CrossOver- 
00:12:27.981 [2024-10-14 17:29:25.008031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-10-14 17:29:25.008061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:27.981 #45 NEW cov: 12544 ft: 15391 corp: 16/803b lim: 120 exec/s: 45 rss: 75Mb L: 32/111 MS: 1 CMP- DE: "\377?\000\000"- 00:12:27.981 [2024-10-14 17:29:25.058326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11628229974322843650 len:17 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:27.981 [2024-10-14 17:29:25.058358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.240 #46 NEW cov: 12544 ft: 15453 corp: 17/839b lim: 120 exec/s: 46 rss: 75Mb L: 36/111 MS: 1 PersAutoDict- DE: "\002\241_\305\215)+\000"- 00:12:28.240 [2024-10-14 17:29:25.108401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8795547733302317064 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.108434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.240 #47 NEW cov: 12544 ft: 15464 corp: 18/876b lim: 120 exec/s: 47 rss: 75Mb L: 37/111 MS: 1 CopyPart- 00:12:28.240 [2024-10-14 17:29:25.178598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.178629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.240 #48 NEW cov: 12544 ft: 15494 corp: 19/912b lim: 120 exec/s: 48 rss: 75Mb L: 36/111 MS: 1 PersAutoDict- DE: "\002\241_\305\215)+\000"- 00:12:28.240 [2024-10-14 17:29:25.229910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.229940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.240 [2024-10-14 17:29:25.230013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.230033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:28.240 [2024-10-14 17:29:25.230115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.230134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:28.240 [2024-10-14 17:29:25.230218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:102655579741533 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.230238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:28.240 #49 NEW cov: 12544 ft: 15548 
corp: 20/1025b lim: 120 exec/s: 49 rss: 75Mb L: 113/113 MS: 1 CMP- DE: "\000\000"- 00:12:28.240 [2024-10-14 17:29:25.279004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636072548621661 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.240 [2024-10-14 17:29:25.279039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.240 #50 NEW cov: 12544 ft: 15575 corp: 21/1070b lim: 120 exec/s: 50 rss: 75Mb L: 45/113 MS: 1 ChangeBit- 00:12:28.499 [2024-10-14 17:29:25.349999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070172049407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.350035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.499 [2024-10-14 17:29:25.350093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.350112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:28.499 [2024-10-14 17:29:25.350183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.350200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:28.499 #51 NEW cov: 12544 ft: 15579 corp: 22/1150b lim: 120 exec/s: 51 rss: 75Mb L: 80/113 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:12:28.499 [2024-10-14 17:29:25.399849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:8795547733302317064 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.399878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.499 [2024-10-14 17:29:25.399934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636072644218973 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.399952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:28.499 #52 NEW cov: 12544 ft: 15906 corp: 23/1211b lim: 120 exec/s: 52 rss: 75Mb L: 61/113 MS: 1 CrossOver- 00:12:28.499 [2024-10-14 17:29:25.469960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.469989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.499 #53 NEW cov: 12544 ft: 15909 corp: 24/1244b lim: 120 exec/s: 53 rss: 75Mb L: 33/113 MS: 1 InsertByte- 00:12:28.499 [2024-10-14 17:29:25.540889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070172049407 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.540916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:12:28.499 [2024-10-14 17:29:25.540990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.541008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:28.499 [2024-10-14 17:29:25.541106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.541126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:28.499 #54 NEW cov: 12544 ft: 15915 corp: 25/1316b lim: 120 exec/s: 54 rss: 75Mb L: 72/113 MS: 1 CMP- DE: "\000\000\000\000"- 00:12:28.499 [2024-10-14 17:29:25.590438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1157442767187611664 len:4113 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.499 [2024-10-14 17:29:25.590469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.758 #55 NEW cov: 12544 ft: 15958 corp: 26/1352b lim: 120 exec/s: 55 rss: 75Mb L: 36/113 MS: 1 CopyPart- 00:12:28.758 [2024-10-14 17:29:25.660720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636072565398877 len:94 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.660748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.758 #58 NEW cov: 12544 ft: 15967 corp: 27/1383b lim: 120 exec/s: 58 rss: 75Mb L: 31/113 MS: 3 PersAutoDict-ChangeBit-CrossOver- DE: "\000\000"- 00:12:28.758 [2024-10-14 17:29:25.711141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636072565398877 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.711170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.758 [2024-10-14 17:29:25.711244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.711263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:28.758 #59 NEW cov: 12544 ft: 15984 corp: 28/1439b lim: 120 exec/s: 59 rss: 75Mb L: 56/113 MS: 1 CrossOver- 00:12:28.758 [2024-10-14 17:29:25.782155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.782185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:28.758 [2024-10-14 17:29:25.782253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23980 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.782272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:28.758 [2024-10-14 17:29:25.782341] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6727636073941130589 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.782358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:28.758 [2024-10-14 17:29:25.782439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:102655579741533 len:23902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:28.758 [2024-10-14 17:29:25.782458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:28.758 #60 NEW cov: 12544 ft: 16040 corp: 29/1552b lim: 120 exec/s: 30 rss: 76Mb L: 113/113 MS: 1 ChangeBinInt- 00:12:28.758 #60 DONE cov: 12544 ft: 16040 corp: 29/1552b lim: 120 exec/s: 30 rss: 76Mb 00:12:28.758 ###### Recommended dictionary. ###### 00:12:28.758 "\000\000\000\000\000\000\000\000" # Uses: 0 00:12:28.758 "\017\000\000\000\000\000\000\000" # Uses: 1 00:12:28.758 "\002\241_\305\215)+\000" # Uses: 2 00:12:28.758 "\377?\000\000" # Uses: 0 00:12:28.758 "\000\000" # Uses: 1 00:12:28.758 "\000\000\000\000" # Uses: 0 00:12:28.758 ###### End of recommended dictionary. ###### 00:12:28.758 Done 60 runs in 2 second(s) 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:29.017 17:29:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:12:29.017 [2024-10-14 17:29:25.976252] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:29.017 [2024-10-14 17:29:25.976320] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109582 ] 00:12:29.276 [2024-10-14 17:29:26.162849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.276 [2024-10-14 17:29:26.201117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.276 [2024-10-14 17:29:26.260211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.276 [2024-10-14 17:29:26.276349] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:12:29.276 INFO: Running with entropic power schedule (0xFF, 100). 00:12:29.276 INFO: Seed: 50419407 00:12:29.276 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:29.276 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:29.276 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:12:29.276 INFO: A corpus is not provided, starting from an empty corpus 00:12:29.276 #2 INITED exec/s: 0 rss: 66Mb 00:12:29.276 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:29.276 This may also happen if the target rejected all inputs we tried so far 00:12:29.276 [2024-10-14 17:29:26.324145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:29.276 [2024-10-14 17:29:26.324189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:29.276 [2024-10-14 17:29:26.324238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:29.276 [2024-10-14 17:29:26.324255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:29.276 [2024-10-14 17:29:26.324284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:29.276 [2024-10-14 17:29:26.324299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:29.276 [2024-10-14 17:29:26.324329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:29.276 [2024-10-14 17:29:26.324344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:29.794 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:12:29.794 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:29.794 #3 NEW cov: 12250 ft: 12239 corp: 2/94b lim: 100 exec/s: 0 rss: 74Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:12:29.794 [2024-10-14 17:29:26.695046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:29.794 [2024-10-14 17:29:26.695088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:29.794 [2024-10-14 17:29:26.695123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:29.794 [2024-10-14 17:29:26.695140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:29.794 [2024-10-14 17:29:26.695170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:29.794 [2024-10-14 17:29:26.695186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:29.794 #4 NEW cov: 12364 ft: 13046 corp: 3/168b lim: 100 exec/s: 0 rss: 74Mb L: 74/93 MS: 1 EraseBytes- 00:12:29.794 [2024-10-14 17:29:26.795198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:29.794 [2024-10-14 17:29:26.795234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:29.794 [2024-10-14 17:29:26.795268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:29.794 [2024-10-14 17:29:26.795285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:29.794 [2024-10-14 17:29:26.795314] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:29.794 [2024-10-14 17:29:26.795329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:29.794 #5 NEW cov: 12370 ft: 13266 corp: 4/242b lim: 100 exec/s: 0 rss: 74Mb L: 74/93 MS: 1 CopyPart- 00:12:29.794 [2024-10-14 17:29:26.885392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:29.794 [2024-10-14 17:29:26.885424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:29.794 [2024-10-14 17:29:26.885457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:29.794 [2024-10-14 17:29:26.885474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:29.794 [2024-10-14 17:29:26.885505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:29.794 [2024-10-14 17:29:26.885521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.053 #6 NEW cov: 12455 ft: 13715 corp: 5/316b lim: 100 exec/s: 0 rss: 74Mb L: 74/93 MS: 1 ShuffleBytes- 00:12:30.053 [2024-10-14 17:29:26.975519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.053 [2024-10-14 17:29:26.975549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.053 [2024-10-14 17:29:26.975583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.053 [2024-10-14 17:29:26.975599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.053 #7 NEW cov: 12455 ft: 14253 corp: 6/364b lim: 100 exec/s: 0 rss: 74Mb L: 48/93 MS: 1 CrossOver- 00:12:30.053 [2024-10-14 17:29:27.035769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.053 [2024-10-14 17:29:27.035799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.053 [2024-10-14 17:29:27.035830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.053 [2024-10-14 17:29:27.035846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.053 [2024-10-14 17:29:27.035877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.053 [2024-10-14 17:29:27.035892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.053 [2024-10-14 17:29:27.035920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:30.053 [2024-10-14 17:29:27.035934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:30.053 #8 NEW cov: 12455 ft: 14303 corp: 7/459b lim: 100 exec/s: 0 rss: 74Mb L: 95/95 MS: 1 CopyPart- 00:12:30.053 [2024-10-14 
17:29:27.095804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.053 [2024-10-14 17:29:27.095836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.053 [2024-10-14 17:29:27.095885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.053 [2024-10-14 17:29:27.095902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.053 #9 NEW cov: 12455 ft: 14341 corp: 8/513b lim: 100 exec/s: 0 rss: 74Mb L: 54/95 MS: 1 EraseBytes- 00:12:30.312 [2024-10-14 17:29:27.155953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.312 [2024-10-14 17:29:27.155981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.156034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.312 [2024-10-14 17:29:27.156052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.312 #10 NEW cov: 12455 ft: 14424 corp: 9/562b lim: 100 exec/s: 0 rss: 74Mb L: 49/95 MS: 1 EraseBytes- 00:12:30.312 [2024-10-14 17:29:27.206141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.312 [2024-10-14 17:29:27.206169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.206215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.312 [2024-10-14 17:29:27.206231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.206261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.312 [2024-10-14 17:29:27.206276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.312 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:30.312 #11 NEW cov: 12478 ft: 14526 corp: 10/636b lim: 100 exec/s: 0 rss: 74Mb L: 74/95 MS: 1 ChangeBinInt- 00:12:30.312 [2024-10-14 17:29:27.296399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.312 [2024-10-14 17:29:27.296428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.296475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.312 [2024-10-14 17:29:27.296491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.296521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.312 [2024-10-14 17:29:27.296536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:12:30.312 #12 NEW cov: 12478 ft: 14557 corp: 11/710b lim: 100 exec/s: 12 rss: 74Mb L: 74/95 MS: 1 ChangeByte- 00:12:30.312 [2024-10-14 17:29:27.346552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.312 [2024-10-14 17:29:27.346580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.346627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.312 [2024-10-14 17:29:27.346644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.346674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.312 [2024-10-14 17:29:27.346689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.312 [2024-10-14 17:29:27.346722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:30.312 [2024-10-14 17:29:27.346737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:30.312 #13 NEW cov: 12478 ft: 14620 corp: 12/792b lim: 100 exec/s: 13 rss: 74Mb L: 82/95 MS: 1 InsertRepeatedBytes- 00:12:30.571 [2024-10-14 17:29:27.406667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.571 [2024-10-14 17:29:27.406696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.406729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.571 [2024-10-14 17:29:27.406746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.571 #14 NEW cov: 12478 ft: 14642 corp: 13/841b lim: 100 exec/s: 14 rss: 74Mb L: 49/95 MS: 1 ChangeBit- 00:12:30.571 [2024-10-14 17:29:27.496976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.571 [2024-10-14 17:29:27.497006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.497044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.571 [2024-10-14 17:29:27.497062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.497092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.571 [2024-10-14 17:29:27.497107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.497135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:30.571 [2024-10-14 17:29:27.497150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:30.571 #15 NEW cov: 12478 ft: 14687 
corp: 14/936b lim: 100 exec/s: 15 rss: 74Mb L: 95/95 MS: 1 ChangeBit- 00:12:30.571 [2024-10-14 17:29:27.587173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.571 [2024-10-14 17:29:27.587202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.587234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.571 [2024-10-14 17:29:27.587250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.587280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.571 [2024-10-14 17:29:27.587295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.571 #16 NEW cov: 12478 ft: 14694 corp: 15/1010b lim: 100 exec/s: 16 rss: 74Mb L: 74/95 MS: 1 ChangeBit- 00:12:30.571 [2024-10-14 17:29:27.647327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.571 [2024-10-14 17:29:27.647356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.647388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.571 [2024-10-14 17:29:27.647404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.571 [2024-10-14 17:29:27.647434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.571 [2024-10-14 17:29:27.647457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.831 #17 NEW cov: 12478 ft: 14726 corp: 16/1084b lim: 100 exec/s: 17 rss: 74Mb L: 74/95 MS: 1 ChangeByte- 00:12:30.831 [2024-10-14 17:29:27.737553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.831 [2024-10-14 17:29:27.737582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.831 [2024-10-14 17:29:27.737630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.831 [2024-10-14 17:29:27.737646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.831 [2024-10-14 17:29:27.737676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.831 [2024-10-14 17:29:27.737691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.831 #18 NEW cov: 12478 ft: 14806 corp: 17/1158b lim: 100 exec/s: 18 rss: 75Mb L: 74/95 MS: 1 EraseBytes- 00:12:30.831 [2024-10-14 17:29:27.827790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.831 [2024-10-14 17:29:27.827821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:12:30.831 [2024-10-14 17:29:27.827854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.831 [2024-10-14 17:29:27.827870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.831 [2024-10-14 17:29:27.827900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.831 [2024-10-14 17:29:27.827916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:30.831 #19 NEW cov: 12478 ft: 14830 corp: 18/1232b lim: 100 exec/s: 19 rss: 75Mb L: 74/95 MS: 1 ChangeBinInt- 00:12:30.831 [2024-10-14 17:29:27.918065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:30.831 [2024-10-14 17:29:27.918095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:30.831 [2024-10-14 17:29:27.918128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:30.831 [2024-10-14 17:29:27.918145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:30.831 [2024-10-14 17:29:27.918175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:30.831 [2024-10-14 17:29:27.918191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:31.101 #20 NEW cov: 12478 ft: 14837 corp: 19/1306b lim: 100 exec/s: 20 rss: 75Mb L: 74/95 MS: 1 ShuffleBytes- 00:12:31.101 [2024-10-14 17:29:27.968175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:31.101 [2024-10-14 17:29:27.968203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:31.101 [2024-10-14 17:29:27.968249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:31.101 [2024-10-14 17:29:27.968267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:31.101 [2024-10-14 17:29:27.968297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:31.101 [2024-10-14 17:29:27.968312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:31.101 [2024-10-14 17:29:27.968340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:12:31.101 [2024-10-14 17:29:27.968360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:31.101 #21 NEW cov: 12478 ft: 14865 corp: 20/1401b lim: 100 exec/s: 21 rss: 75Mb L: 95/95 MS: 1 ShuffleBytes- 00:12:31.101 [2024-10-14 17:29:28.028207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:31.101 [2024-10-14 17:29:28.028236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:31.101 #22 NEW cov: 12478 ft: 15221 corp: 21/1439b lim: 100 
exec/s: 22 rss: 75Mb L: 38/95 MS: 1 EraseBytes- 00:12:31.101 [2024-10-14 17:29:28.118563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:31.101 [2024-10-14 17:29:28.118592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:31.101 [2024-10-14 17:29:28.118624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:31.101 [2024-10-14 17:29:28.118640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:31.101 [2024-10-14 17:29:28.118670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:31.101 [2024-10-14 17:29:28.118686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:31.101 #23 NEW cov: 12478 ft: 15227 corp: 22/1513b lim: 100 exec/s: 23 rss: 75Mb L: 74/95 MS: 1 CopyPart- 00:12:31.101 [2024-10-14 17:29:28.178696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:31.102 [2024-10-14 17:29:28.178724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:31.102 [2024-10-14 17:29:28.178770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:31.102 [2024-10-14 17:29:28.178786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:31.102 [2024-10-14 17:29:28.178816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:31.102 [2024-10-14 17:29:28.178831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:31.362 #24 NEW cov: 12478 ft: 15234 corp: 23/1591b lim: 100 exec/s: 24 rss: 75Mb L: 78/95 MS: 1 InsertRepeatedBytes- 00:12:31.362 [2024-10-14 17:29:28.238872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:12:31.362 [2024-10-14 17:29:28.238901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:31.362 [2024-10-14 17:29:28.238948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:12:31.362 [2024-10-14 17:29:28.238965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:31.362 [2024-10-14 17:29:28.238996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:12:31.362 [2024-10-14 17:29:28.239012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:31.362 #25 NEW cov: 12478 ft: 15263 corp: 24/1665b lim: 100 exec/s: 12 rss: 75Mb L: 74/95 MS: 1 ChangeBit- 00:12:31.362 #25 DONE cov: 12478 ft: 15263 corp: 24/1665b lim: 100 exec/s: 12 rss: 75Mb 00:12:31.362 Done 25 runs in 2 second(s) 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( 
i++ )) 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:31.362 17:29:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:12:31.621 [2024-10-14 17:29:28.474616] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:31.622 [2024-10-14 17:29:28.474689] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109935 ] 00:12:31.622 [2024-10-14 17:29:28.663755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.622 [2024-10-14 17:29:28.701907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.881 [2024-10-14 17:29:28.760978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.881 [2024-10-14 17:29:28.777136] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:12:31.881 INFO: Running with entropic power schedule (0xFF, 100). 
00:12:31.881 INFO: Seed: 2553422025 00:12:31.881 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:31.881 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:31.881 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:12:31.881 INFO: A corpus is not provided, starting from an empty corpus 00:12:31.881 #2 INITED exec/s: 0 rss: 66Mb 00:12:31.881 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:31.881 This may also happen if the target rejected all inputs we tried so far 00:12:31.881 [2024-10-14 17:29:28.821982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:31.881 [2024-10-14 17:29:28.822017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:31.881 [2024-10-14 17:29:28.822075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:12:31.881 [2024-10-14 17:29:28.822094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.140 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:12:32.140 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:32.140 #20 NEW cov: 12229 ft: 12228 corp: 2/25b lim: 50 exec/s: 0 rss: 74Mb L: 24/24 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:12:32.140 [2024-10-14 17:29:29.192920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.140 [2024-10-14 17:29:29.192961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.140 [2024-10-14 17:29:29.193011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12901679104 len:1 00:12:32.140 [2024-10-14 17:29:29.193037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.400 #21 NEW cov: 12342 ft: 12834 corp: 3/49b lim: 50 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ChangeBinInt- 00:12:32.400 [2024-10-14 17:29:29.283152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.400 [2024-10-14 17:29:29.283187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.400 [2024-10-14 17:29:29.283219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:12:32.400 [2024-10-14 17:29:29.283237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.400 [2024-10-14 17:29:29.283267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:12:32.400 [2024-10-14 17:29:29.283284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:32.400 [2024-10-14 
17:29:29.283313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11196979740672 len:1 00:12:32.400 [2024-10-14 17:29:29.283330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:32.400 #22 NEW cov: 12348 ft: 13535 corp: 4/92b lim: 50 exec/s: 0 rss: 74Mb L: 43/43 MS: 1 CrossOver- 00:12:32.400 [2024-10-14 17:29:29.343163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.400 [2024-10-14 17:29:29.343193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.400 [2024-10-14 17:29:29.343241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16383744 len:1 00:12:32.400 [2024-10-14 17:29:29.343260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.400 #23 NEW cov: 12433 ft: 13831 corp: 5/116b lim: 50 exec/s: 0 rss: 74Mb L: 24/43 MS: 1 ChangeBinInt- 00:12:32.400 [2024-10-14 17:29:29.403309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.400 [2024-10-14 17:29:29.403339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.400 [2024-10-14 17:29:29.403386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12901679104 len:4 00:12:32.400 [2024-10-14 17:29:29.403405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.400 #24 NEW cov: 12433 ft: 13882 corp: 6/144b lim: 50 exec/s: 0 rss: 74Mb L: 28/43 MS: 1 CMP- DE: "\003\000\000\000"- 00:12:32.660 [2024-10-14 17:29:29.493601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.660 [2024-10-14 17:29:29.493635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.493680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23068672 len:769 00:12:32.660 [2024-10-14 17:29:29.493701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.660 #25 NEW cov: 12433 ft: 13965 corp: 7/169b lim: 50 exec/s: 0 rss: 74Mb L: 25/43 MS: 1 InsertByte- 00:12:32.660 [2024-10-14 17:29:29.553717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.660 [2024-10-14 17:29:29.553747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.553795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12901679106 len:1 00:12:32.660 [2024-10-14 17:29:29.553813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.660 #26 NEW cov: 12433 ft: 14069 corp: 8/193b lim: 50 exec/s: 0 rss: 74Mb L: 24/43 MS: 1 ChangeBit- 00:12:32.660 
[2024-10-14 17:29:29.613895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:32769 00:12:32.660 [2024-10-14 17:29:29.613926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.613975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23068672 len:769 00:12:32.660 [2024-10-14 17:29:29.613993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.660 #27 NEW cov: 12433 ft: 14147 corp: 9/218b lim: 50 exec/s: 0 rss: 74Mb L: 25/43 MS: 1 ChangeBit- 00:12:32.660 [2024-10-14 17:29:29.704267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.660 [2024-10-14 17:29:29.704298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.704330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:12:32.660 [2024-10-14 17:29:29.704348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.704378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:12:32.660 [2024-10-14 17:29:29.704395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.704422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:12:32.660 [2024-10-14 17:29:29.704439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:32.660 [2024-10-14 17:29:29.704467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:0 len:2608 00:12:32.660 [2024-10-14 17:29:29.704483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:32.660 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:32.660 #28 NEW cov: 12450 ft: 14273 corp: 10/268b lim: 50 exec/s: 0 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:12:32.920 [2024-10-14 17:29:29.764256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.920 [2024-10-14 17:29:29.764287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.920 [2024-10-14 17:29:29.764335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17605087723521 len:1 00:12:32.920 [2024-10-14 17:29:29.764358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.920 #29 NEW cov: 12450 ft: 14348 corp: 11/292b lim: 50 exec/s: 0 rss: 74Mb L: 24/50 MS: 1 CMP- DE: "\001\000\000\020"- 00:12:32.920 [2024-10-14 17:29:29.814464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:196608 len:1 00:12:32.920 [2024-10-14 17:29:29.814494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.920 [2024-10-14 17:29:29.814527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:12:32.920 [2024-10-14 17:29:29.814545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.920 [2024-10-14 17:29:29.814575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:12:32.920 [2024-10-14 17:29:29.814592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:32.920 #30 NEW cov: 12450 ft: 14578 corp: 12/324b lim: 50 exec/s: 30 rss: 74Mb L: 32/50 MS: 1 CMP- DE: "\003\000\000\000\000\000\000\000"- 00:12:32.920 [2024-10-14 17:29:29.874569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:32.920 [2024-10-14 17:29:29.874599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.920 [2024-10-14 17:29:29.874633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12901679106 len:1 00:12:32.920 [2024-10-14 17:29:29.874650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:32.920 #31 NEW cov: 12450 ft: 14610 corp: 13/348b lim: 50 exec/s: 31 rss: 74Mb L: 24/50 MS: 1 CopyPart- 00:12:32.920 [2024-10-14 17:29:29.964776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1073741824 len:1 00:12:32.920 [2024-10-14 17:29:29.964806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:32.920 [2024-10-14 17:29:29.964853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16383744 len:1 00:12:32.920 [2024-10-14 17:29:29.964871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.179 #32 NEW cov: 12450 ft: 14625 corp: 14/372b lim: 50 exec/s: 32 rss: 74Mb L: 24/50 MS: 1 ChangeBit- 00:12:33.179 [2024-10-14 17:29:30.055116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:123 len:1 00:12:33.179 [2024-10-14 17:29:30.055159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.179 [2024-10-14 17:29:30.055194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23068672 len:769 00:12:33.179 [2024-10-14 17:29:30.055213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.179 #33 NEW cov: 12450 ft: 14651 corp: 15/397b lim: 50 exec/s: 33 rss: 74Mb L: 25/50 MS: 1 ChangeByte- 00:12:33.179 [2024-10-14 17:29:30.115250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1069446856704 len:1 00:12:33.179 
[2024-10-14 17:29:30.115293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.179 [2024-10-14 17:29:30.115329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12901679104 len:4 00:12:33.179 [2024-10-14 17:29:30.115348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.179 #34 NEW cov: 12450 ft: 14672 corp: 16/425b lim: 50 exec/s: 34 rss: 74Mb L: 28/50 MS: 1 ChangeBinInt- 00:12:33.179 [2024-10-14 17:29:30.205401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:70367645739778048 len:1 00:12:33.179 [2024-10-14 17:29:30.205433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.438 #35 NEW cov: 12450 ft: 14975 corp: 17/443b lim: 50 exec/s: 35 rss: 74Mb L: 18/50 MS: 1 EraseBytes- 00:12:33.438 [2024-10-14 17:29:30.295706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:33.438 [2024-10-14 17:29:30.295737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.438 [2024-10-14 17:29:30.295783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:12:33.438 [2024-10-14 17:29:30.295801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.438 [2024-10-14 17:29:30.295831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:23068672 len:769 00:12:33.438 [2024-10-14 17:29:30.295848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:33.438 #36 NEW cov: 12450 ft: 14988 corp: 18/473b lim: 50 exec/s: 36 rss: 74Mb L: 30/50 MS: 1 CrossOver- 00:12:33.438 [2024-10-14 17:29:30.355827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:33.438 [2024-10-14 17:29:30.355859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.438 [2024-10-14 17:29:30.355893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23068672 len:769 00:12:33.438 [2024-10-14 17:29:30.355913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.438 #37 NEW cov: 12450 ft: 14999 corp: 19/502b lim: 50 exec/s: 37 rss: 74Mb L: 29/50 MS: 1 PersAutoDict- DE: "\001\000\000\020"- 00:12:33.438 [2024-10-14 17:29:30.405901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:70367645739778048 len:1 00:12:33.438 [2024-10-14 17:29:30.405931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.438 #38 NEW cov: 12450 ft: 15114 corp: 20/520b lim: 50 exec/s: 38 rss: 75Mb L: 18/50 MS: 1 ChangeBinInt- 00:12:33.438 [2024-10-14 17:29:30.496154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:70367645739778048 len:1 00:12:33.438 [2024-10-14 17:29:30.496186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.697 #39 NEW cov: 12450 ft: 15154 corp: 21/539b lim: 50 exec/s: 39 rss: 75Mb L: 19/50 MS: 1 InsertByte- 00:12:33.697 [2024-10-14 17:29:30.586429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:33.697 [2024-10-14 17:29:30.586460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.697 [2024-10-14 17:29:30.586509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12901679104 len:1 00:12:33.697 [2024-10-14 17:29:30.586527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.697 #40 NEW cov: 12450 ft: 15181 corp: 22/563b lim: 50 exec/s: 40 rss: 75Mb L: 24/50 MS: 1 ShuffleBytes- 00:12:33.697 [2024-10-14 17:29:30.636569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:12:33.697 [2024-10-14 17:29:30.636599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.697 [2024-10-14 17:29:30.636651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:8606711810 len:1 00:12:33.697 [2024-10-14 17:29:30.636670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.697 #41 NEW cov: 12457 ft: 15211 corp: 23/587b lim: 50 exec/s: 41 rss: 75Mb L: 24/50 MS: 1 CopyPart- 00:12:33.697 [2024-10-14 17:29:30.726757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:24577 00:12:33.697 [2024-10-14 17:29:30.726787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.697 [2024-10-14 17:29:30.726835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9223372036871553024 len:769 00:12:33.697 [2024-10-14 17:29:30.726853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.957 #42 NEW cov: 12457 ft: 15281 corp: 24/612b lim: 50 exec/s: 42 rss: 75Mb L: 25/50 MS: 1 ShuffleBytes- 00:12:33.957 [2024-10-14 17:29:30.816992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1073741824 len:250 00:12:33.957 [2024-10-14 17:29:30.817022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:33.957 [2024-10-14 17:29:30.817077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:12:33.957 [2024-10-14 17:29:30.817096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:33.957 #43 NEW cov: 12457 ft: 15317 corp: 25/634b lim: 50 exec/s: 21 rss: 75Mb L: 22/50 MS: 1 EraseBytes- 00:12:33.957 #43 DONE cov: 12457 ft: 15317 corp: 25/634b lim: 50 
exec/s: 21 rss: 75Mb 00:12:33.957 ###### Recommended dictionary. ###### 00:12:33.957 "\003\000\000\000" # Uses: 0 00:12:33.957 "\001\000\000\020" # Uses: 1 00:12:33.957 "\003\000\000\000\000\000\000\000" # Uses: 0 00:12:33.957 ###### End of recommended dictionary. ###### 00:12:33.957 Done 43 runs in 2 second(s) 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:33.957 17:29:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:12:33.957 [2024-10-14 17:29:31.009620] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
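
The "Recommended dictionary" block printed at the end of run 19 above lists byte sequences libFuzzer found useful as mutations. If one wanted to carry them over to later runs, they could be pulled out of a saved log into a libFuzzer-style dictionary file; the sketch below is only an illustration under assumed names ('build.log', 'nvmf_19.dict'), and whether SPDK's llvm_nvme_fuzz wrapper forwards a -dict= option at all is not shown in this log.

    # Hypothetical post-processing, not part of run.sh: extract the lines between the two
    # '######' markers and drop the timestamp prefix and the '# Uses:' suffix.
    sed -n '/###### Recommended dictionary. ######/,/###### End of recommended dictionary. ######/p' build.log \
        | grep '# Uses:' \
        | sed -e 's/^[0-9:.]* *//' -e 's/ *# Uses:.*$//' > nvmf_19.dict
    # The entries come out as quoted strings ("\003\000\000\000", "\001\000\000\020", ...);
    # a stock libFuzzer -dict= parser expects \xNN hex escapes, so the octal escapes shown
    # here may need converting before the file is usable.
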
00:12:33.957 [2024-10-14 17:29:31.009692] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110298 ] 00:12:34.216 [2024-10-14 17:29:31.200594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.216 [2024-10-14 17:29:31.238540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.216 [2024-10-14 17:29:31.297528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.476 [2024-10-14 17:29:31.313668] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:34.476 INFO: Running with entropic power schedule (0xFF, 100). 00:12:34.476 INFO: Seed: 793451596 00:12:34.476 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:34.476 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:34.476 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:12:34.476 INFO: A corpus is not provided, starting from an empty corpus 00:12:34.476 #2 INITED exec/s: 0 rss: 66Mb 00:12:34.476 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:34.476 This may also happen if the target rejected all inputs we tried so far 00:12:34.476 [2024-10-14 17:29:31.369598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.476 [2024-10-14 17:29:31.369630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.476 [2024-10-14 17:29:31.369686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.476 [2024-10-14 17:29:31.369702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.476 [2024-10-14 17:29:31.369753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.476 [2024-10-14 17:29:31.369768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.476 [2024-10-14 17:29:31.369821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:34.476 [2024-10-14 17:29:31.369836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:34.735 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:12:34.735 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:34.735 #8 NEW cov: 12287 ft: 12283 corp: 2/90b lim: 90 exec/s: 0 rss: 74Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:12:34.735 [2024-10-14 17:29:31.710405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.735 [2024-10-14 17:29:31.710467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.735 [2024-10-14 
17:29:31.710550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.735 [2024-10-14 17:29:31.710580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.735 [2024-10-14 17:29:31.710659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.735 [2024-10-14 17:29:31.710694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.735 #17 NEW cov: 12400 ft: 13161 corp: 3/157b lim: 90 exec/s: 0 rss: 74Mb L: 67/89 MS: 4 ChangeByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:12:34.735 [2024-10-14 17:29:31.760450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.735 [2024-10-14 17:29:31.760480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.735 [2024-10-14 17:29:31.760530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.735 [2024-10-14 17:29:31.760546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.735 [2024-10-14 17:29:31.760598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.736 [2024-10-14 17:29:31.760614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.736 [2024-10-14 17:29:31.760667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:34.736 [2024-10-14 17:29:31.760683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:34.736 #18 NEW cov: 12406 ft: 13495 corp: 4/246b lim: 90 exec/s: 0 rss: 74Mb L: 89/89 MS: 1 ChangeBinInt- 00:12:34.736 [2024-10-14 17:29:31.820513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.736 [2024-10-14 17:29:31.820540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.736 [2024-10-14 17:29:31.820578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.736 [2024-10-14 17:29:31.820594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.736 [2024-10-14 17:29:31.820650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.736 [2024-10-14 17:29:31.820666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.995 #19 NEW cov: 12491 ft: 13833 corp: 5/306b lim: 90 exec/s: 0 rss: 74Mb L: 60/89 MS: 1 CrossOver- 00:12:34.995 [2024-10-14 17:29:31.880570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.995 [2024-10-14 17:29:31.880597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:12:34.995 [2024-10-14 17:29:31.880663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.995 [2024-10-14 17:29:31.880680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:31.880737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.995 [2024-10-14 17:29:31.880752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.995 #20 NEW cov: 12491 ft: 13961 corp: 6/366b lim: 90 exec/s: 0 rss: 74Mb L: 60/89 MS: 1 ChangeBit- 00:12:34.995 [2024-10-14 17:29:31.940777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.995 [2024-10-14 17:29:31.940805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:31.940840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.995 [2024-10-14 17:29:31.940858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:31.940913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.995 [2024-10-14 17:29:31.940929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.995 #21 NEW cov: 12491 ft: 14077 corp: 7/433b lim: 90 exec/s: 0 rss: 74Mb L: 67/89 MS: 1 ChangeBinInt- 00:12:34.995 [2024-10-14 17:29:31.980893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.995 [2024-10-14 17:29:31.980920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:31.980968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.995 [2024-10-14 17:29:31.980984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:31.981039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.995 [2024-10-14 17:29:31.981071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.995 #22 NEW cov: 12491 ft: 14236 corp: 8/500b lim: 90 exec/s: 0 rss: 74Mb L: 67/89 MS: 1 CMP- DE: "\000\000\000\002"- 00:12:34.995 [2024-10-14 17:29:32.041249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:34.995 [2024-10-14 17:29:32.041276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:32.041353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:34.995 [2024-10-14 17:29:32.041369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:32.041424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:34.995 [2024-10-14 17:29:32.041439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:34.995 [2024-10-14 17:29:32.041493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:34.995 [2024-10-14 17:29:32.041509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:34.995 #23 NEW cov: 12491 ft: 14333 corp: 9/586b lim: 90 exec/s: 0 rss: 74Mb L: 86/89 MS: 1 InsertRepeatedBytes- 00:12:35.255 [2024-10-14 17:29:32.101537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.255 [2024-10-14 17:29:32.101564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.101620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.255 [2024-10-14 17:29:32.101636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.101690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.255 [2024-10-14 17:29:32.101705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.101759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.255 [2024-10-14 17:29:32.101775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.101829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:12:35.255 [2024-10-14 17:29:32.101849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:35.255 #24 NEW cov: 12491 ft: 14419 corp: 10/676b lim: 90 exec/s: 0 rss: 74Mb L: 90/90 MS: 1 CMP- DE: "\377\377\377\001"- 00:12:35.255 [2024-10-14 17:29:32.161385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.255 [2024-10-14 17:29:32.161413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.161461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.255 [2024-10-14 17:29:32.161477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.161529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.255 [2024-10-14 17:29:32.161544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.255 #25 NEW cov: 12491 ft: 14462 corp: 11/736b lim: 90 exec/s: 0 rss: 74Mb L: 
60/90 MS: 1 ChangeByte- 00:12:35.255 [2024-10-14 17:29:32.221540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.255 [2024-10-14 17:29:32.221567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.255 [2024-10-14 17:29:32.221617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.255 [2024-10-14 17:29:32.221632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.221686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.256 [2024-10-14 17:29:32.221703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.256 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:35.256 #26 NEW cov: 12514 ft: 14472 corp: 12/807b lim: 90 exec/s: 0 rss: 74Mb L: 71/90 MS: 1 PersAutoDict- DE: "\000\000\000\002"- 00:12:35.256 [2024-10-14 17:29:32.261689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.256 [2024-10-14 17:29:32.261716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.261770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.256 [2024-10-14 17:29:32.261787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.261841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.256 [2024-10-14 17:29:32.261857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.256 #27 NEW cov: 12514 ft: 14511 corp: 13/867b lim: 90 exec/s: 0 rss: 74Mb L: 60/90 MS: 1 ChangeByte- 00:12:35.256 [2024-10-14 17:29:32.301931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.256 [2024-10-14 17:29:32.301957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.302022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.256 [2024-10-14 17:29:32.302043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.302099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.256 [2024-10-14 17:29:32.302119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.302186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.256 [2024-10-14 17:29:32.302202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.256 #28 NEW cov: 12514 ft: 14615 corp: 14/956b lim: 90 exec/s: 0 rss: 74Mb L: 89/90 MS: 1 ChangeBit- 00:12:35.256 [2024-10-14 17:29:32.341946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.256 [2024-10-14 17:29:32.341973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.342014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.256 [2024-10-14 17:29:32.342037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.256 [2024-10-14 17:29:32.342095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.256 [2024-10-14 17:29:32.342109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.515 #29 NEW cov: 12514 ft: 14668 corp: 15/1017b lim: 90 exec/s: 29 rss: 74Mb L: 61/90 MS: 1 InsertByte- 00:12:35.515 [2024-10-14 17:29:32.382388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.515 [2024-10-14 17:29:32.382414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.515 [2024-10-14 17:29:32.382472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.515 [2024-10-14 17:29:32.382487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.515 [2024-10-14 17:29:32.382544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.515 [2024-10-14 17:29:32.382559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.515 [2024-10-14 17:29:32.382612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.515 [2024-10-14 17:29:32.382627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.515 [2024-10-14 17:29:32.382681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:12:35.515 [2024-10-14 17:29:32.382695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:35.515 #30 NEW cov: 12514 ft: 14718 corp: 16/1107b lim: 90 exec/s: 30 rss: 75Mb L: 90/90 MS: 1 CopyPart- 00:12:35.515 [2024-10-14 17:29:32.442181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.515 [2024-10-14 17:29:32.442208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.515 [2024-10-14 17:29:32.442257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.516 [2024-10-14 17:29:32.442273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.516 [2024-10-14 17:29:32.442329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.516 [2024-10-14 17:29:32.442345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.516 #31 NEW cov: 12514 ft: 14733 corp: 17/1174b lim: 90 exec/s: 31 rss: 75Mb L: 67/90 MS: 1 CrossOver- 00:12:35.516 [2024-10-14 17:29:32.482006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.516 [2024-10-14 17:29:32.482038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.516 #32 NEW cov: 12514 ft: 15522 corp: 18/1199b lim: 90 exec/s: 32 rss: 75Mb L: 25/90 MS: 1 CrossOver- 00:12:35.516 [2024-10-14 17:29:32.522415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.516 [2024-10-14 17:29:32.522442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.516 [2024-10-14 17:29:32.522492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.516 [2024-10-14 17:29:32.522508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.516 [2024-10-14 17:29:32.522563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.516 [2024-10-14 17:29:32.522578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.516 #33 NEW cov: 12514 ft: 15532 corp: 19/1260b lim: 90 exec/s: 33 rss: 75Mb L: 61/90 MS: 1 ChangeByte- 00:12:35.516 [2024-10-14 17:29:32.582759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.516 [2024-10-14 17:29:32.582786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.516 [2024-10-14 17:29:32.582851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.516 [2024-10-14 17:29:32.582868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.516 [2024-10-14 17:29:32.582923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.516 [2024-10-14 17:29:32.582939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.516 [2024-10-14 17:29:32.582997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.516 [2024-10-14 17:29:32.583015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.775 #34 NEW cov: 12514 ft: 15628 corp: 20/1332b lim: 90 exec/s: 34 rss: 75Mb L: 72/90 MS: 1 InsertByte- 00:12:35.775 [2024-10-14 17:29:32.642891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.775 
[2024-10-14 17:29:32.642919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.775 [2024-10-14 17:29:32.642990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.775 [2024-10-14 17:29:32.643007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.775 [2024-10-14 17:29:32.643059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.775 [2024-10-14 17:29:32.643075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.775 [2024-10-14 17:29:32.643129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.775 [2024-10-14 17:29:32.643146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.775 #35 NEW cov: 12514 ft: 15644 corp: 21/1418b lim: 90 exec/s: 35 rss: 75Mb L: 86/90 MS: 1 InsertRepeatedBytes- 00:12:35.775 [2024-10-14 17:29:32.682884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.775 [2024-10-14 17:29:32.682915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.775 [2024-10-14 17:29:32.682970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.775 [2024-10-14 17:29:32.682988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.775 [2024-10-14 17:29:32.683050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.776 [2024-10-14 17:29:32.683065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.776 #36 NEW cov: 12514 ft: 15687 corp: 22/1485b lim: 90 exec/s: 36 rss: 75Mb L: 67/90 MS: 1 ChangeBinInt- 00:12:35.776 [2024-10-14 17:29:32.722998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.776 [2024-10-14 17:29:32.723024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.723069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.776 [2024-10-14 17:29:32.723084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.723140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.776 [2024-10-14 17:29:32.723157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.776 #37 NEW cov: 12514 ft: 15698 corp: 23/1539b lim: 90 exec/s: 37 rss: 75Mb L: 54/90 MS: 1 EraseBytes- 00:12:35.776 [2024-10-14 17:29:32.763315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 
00:12:35.776 [2024-10-14 17:29:32.763341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.763412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.776 [2024-10-14 17:29:32.763428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.763482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.776 [2024-10-14 17:29:32.763499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.763554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.776 [2024-10-14 17:29:32.763569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.776 #38 NEW cov: 12514 ft: 15719 corp: 24/1628b lim: 90 exec/s: 38 rss: 75Mb L: 89/90 MS: 1 CMP- DE: "\011\000"- 00:12:35.776 [2024-10-14 17:29:32.803406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.776 [2024-10-14 17:29:32.803432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.803504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.776 [2024-10-14 17:29:32.803521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.803576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.776 [2024-10-14 17:29:32.803592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.803649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.776 [2024-10-14 17:29:32.803665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.776 #39 NEW cov: 12514 ft: 15747 corp: 25/1700b lim: 90 exec/s: 39 rss: 75Mb L: 72/90 MS: 1 ChangeBinInt- 00:12:35.776 [2024-10-14 17:29:32.863734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:35.776 [2024-10-14 17:29:32.863760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.863822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:35.776 [2024-10-14 17:29:32.863838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.863893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:35.776 [2024-10-14 17:29:32.863907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.863962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:35.776 [2024-10-14 17:29:32.863976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:35.776 [2024-10-14 17:29:32.864038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:12:35.776 [2024-10-14 17:29:32.864055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:36.035 #40 NEW cov: 12514 ft: 15765 corp: 26/1790b lim: 90 exec/s: 40 rss: 75Mb L: 90/90 MS: 1 CrossOver- 00:12:36.035 [2024-10-14 17:29:32.923721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.035 [2024-10-14 17:29:32.923748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.035 [2024-10-14 17:29:32.923817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.035 [2024-10-14 17:29:32.923834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.035 [2024-10-14 17:29:32.923887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.035 [2024-10-14 17:29:32.923903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.035 [2024-10-14 17:29:32.923958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:36.035 [2024-10-14 17:29:32.923974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:36.035 #41 NEW cov: 12514 ft: 15872 corp: 27/1876b lim: 90 exec/s: 41 rss: 75Mb L: 86/90 MS: 1 ChangeBinInt- 00:12:36.035 [2024-10-14 17:29:32.963866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.035 [2024-10-14 17:29:32.963893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.035 [2024-10-14 17:29:32.963967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.035 [2024-10-14 17:29:32.963983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.035 [2024-10-14 17:29:32.964040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.036 [2024-10-14 17:29:32.964057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:32.964116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:36.036 [2024-10-14 17:29:32.964132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:36.036 #42 NEW cov: 12514 ft: 15879 
corp: 28/1965b lim: 90 exec/s: 42 rss: 75Mb L: 89/90 MS: 1 ShuffleBytes- 00:12:36.036 [2024-10-14 17:29:33.003806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.036 [2024-10-14 17:29:33.003832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.003897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.036 [2024-10-14 17:29:33.003913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.003969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.036 [2024-10-14 17:29:33.003985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.036 #43 NEW cov: 12514 ft: 15895 corp: 29/2026b lim: 90 exec/s: 43 rss: 75Mb L: 61/90 MS: 1 InsertByte- 00:12:36.036 [2024-10-14 17:29:33.064128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.036 [2024-10-14 17:29:33.064156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.064205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.036 [2024-10-14 17:29:33.064221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.064277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.036 [2024-10-14 17:29:33.064292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.064347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:36.036 [2024-10-14 17:29:33.064364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:36.036 #44 NEW cov: 12514 ft: 15909 corp: 30/2112b lim: 90 exec/s: 44 rss: 75Mb L: 86/90 MS: 1 ChangeBit- 00:12:36.036 [2024-10-14 17:29:33.104471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.036 [2024-10-14 17:29:33.104498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.104577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.036 [2024-10-14 17:29:33.104594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.104649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.036 [2024-10-14 17:29:33.104664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.104720] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:36.036 [2024-10-14 17:29:33.104737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:36.036 [2024-10-14 17:29:33.104791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:12:36.036 [2024-10-14 17:29:33.104807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:36.295 #45 NEW cov: 12514 ft: 15923 corp: 31/2202b lim: 90 exec/s: 45 rss: 75Mb L: 90/90 MS: 1 ChangeBinInt- 00:12:36.296 [2024-10-14 17:29:33.164149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.296 [2024-10-14 17:29:33.164177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.164224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.296 [2024-10-14 17:29:33.164240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.296 #46 NEW cov: 12514 ft: 16199 corp: 32/2241b lim: 90 exec/s: 46 rss: 76Mb L: 39/90 MS: 1 CrossOver- 00:12:36.296 [2024-10-14 17:29:33.204591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.296 [2024-10-14 17:29:33.204619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.204675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.296 [2024-10-14 17:29:33.204690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.204745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.296 [2024-10-14 17:29:33.204761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.204817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:12:36.296 [2024-10-14 17:29:33.204832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:36.296 #47 NEW cov: 12514 ft: 16234 corp: 33/2323b lim: 90 exec/s: 47 rss: 76Mb L: 82/90 MS: 1 InsertRepeatedBytes- 00:12:36.296 [2024-10-14 17:29:33.244496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.296 [2024-10-14 17:29:33.244523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.244572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.296 [2024-10-14 17:29:33.244589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 
17:29:33.244645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.296 [2024-10-14 17:29:33.244660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.296 #48 NEW cov: 12514 ft: 16262 corp: 34/2384b lim: 90 exec/s: 48 rss: 76Mb L: 61/90 MS: 1 ShuffleBytes- 00:12:36.296 [2024-10-14 17:29:33.284629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.296 [2024-10-14 17:29:33.284656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.284723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.296 [2024-10-14 17:29:33.284740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.284797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.296 [2024-10-14 17:29:33.284813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.296 #49 NEW cov: 12514 ft: 16305 corp: 35/2448b lim: 90 exec/s: 49 rss: 76Mb L: 64/90 MS: 1 EraseBytes- 00:12:36.296 [2024-10-14 17:29:33.344875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:12:36.296 [2024-10-14 17:29:33.344903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.344956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:12:36.296 [2024-10-14 17:29:33.344972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:36.296 [2024-10-14 17:29:33.345032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:12:36.296 [2024-10-14 17:29:33.345050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:36.555 #50 NEW cov: 12514 ft: 16307 corp: 36/2513b lim: 90 exec/s: 25 rss: 76Mb L: 65/90 MS: 1 PersAutoDict- DE: "\000\000\000\002"- 00:12:36.555 #50 DONE cov: 12514 ft: 16307 corp: 36/2513b lim: 90 exec/s: 25 rss: 76Mb 00:12:36.555 ###### Recommended dictionary. ###### 00:12:36.555 "\000\000\000\002" # Uses: 2 00:12:36.555 "\377\377\377\001" # Uses: 0 00:12:36.555 "\011\000" # Uses: 0 00:12:36.555 ###### End of recommended dictionary. 
###### 00:12:36.555 Done 50 runs in 2 second(s) 00:12:36.555 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:12:36.555 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:36.555 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:36.555 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:12:36.555 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:12:36.555 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:36.556 17:29:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:12:36.556 [2024-10-14 17:29:33.540241] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:36.556 [2024-10-14 17:29:33.540312] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110652 ] 00:12:36.815 [2024-10-14 17:29:33.727401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.815 [2024-10-14 17:29:33.766463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.815 [2024-10-14 17:29:33.825476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.815 [2024-10-14 17:29:33.841632] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:12:36.815 INFO: Running with entropic power schedule (0xFF, 100). 00:12:36.815 INFO: Seed: 3322471013 00:12:36.815 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:36.815 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:36.815 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:12:36.815 INFO: A corpus is not provided, starting from an empty corpus 00:12:36.815 #2 INITED exec/s: 0 rss: 66Mb 00:12:36.815 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:36.815 This may also happen if the target rejected all inputs we tried so far 00:12:36.815 [2024-10-14 17:29:33.897090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:36.815 [2024-10-14 17:29:33.897122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.334 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:12:37.334 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:37.334 #4 NEW cov: 12262 ft: 12251 corp: 2/11b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 MS: 2 CrossOver-CMP- DE: "\203\033\210#\223)+\000"- 00:12:37.334 [2024-10-14 17:29:34.238021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.334 [2024-10-14 17:29:34.238092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.334 #9 NEW cov: 12375 ft: 12933 corp: 3/22b lim: 50 exec/s: 0 rss: 74Mb L: 11/11 MS: 5 CrossOver-ShuffleBytes-ShuffleBytes-InsertByte-PersAutoDict- DE: "\203\033\210#\223)+\000"- 00:12:37.334 [2024-10-14 17:29:34.287881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.334 [2024-10-14 17:29:34.287908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.334 #10 NEW cov: 12381 ft: 13231 corp: 4/32b lim: 50 exec/s: 0 rss: 74Mb L: 10/11 MS: 1 CrossOver- 00:12:37.334 [2024-10-14 17:29:34.348076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.334 [2024-10-14 17:29:34.348103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.334 #11 NEW cov: 12466 
ft: 13530 corp: 5/42b lim: 50 exec/s: 0 rss: 74Mb L: 10/11 MS: 1 ChangeBinInt- 00:12:37.334 [2024-10-14 17:29:34.408466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.334 [2024-10-14 17:29:34.408493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.334 [2024-10-14 17:29:34.408536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:37.334 [2024-10-14 17:29:34.408552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:37.334 [2024-10-14 17:29:34.408605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:37.334 [2024-10-14 17:29:34.408620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:37.593 #14 NEW cov: 12466 ft: 14380 corp: 6/79b lim: 50 exec/s: 0 rss: 74Mb L: 37/37 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:12:37.593 [2024-10-14 17:29:34.448291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.593 [2024-10-14 17:29:34.448323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.593 #15 NEW cov: 12466 ft: 14533 corp: 7/90b lim: 50 exec/s: 0 rss: 74Mb L: 11/37 MS: 1 ShuffleBytes- 00:12:37.593 [2024-10-14 17:29:34.508445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.593 [2024-10-14 17:29:34.508473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.593 #19 NEW cov: 12466 ft: 14670 corp: 8/107b lim: 50 exec/s: 0 rss: 74Mb L: 17/37 MS: 4 EraseBytes-InsertByte-PersAutoDict-PersAutoDict- DE: "\203\033\210#\223)+\000"-"\203\033\210#\223)+\000"- 00:12:37.593 [2024-10-14 17:29:34.548581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.593 [2024-10-14 17:29:34.548609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.593 #20 NEW cov: 12466 ft: 14752 corp: 9/118b lim: 50 exec/s: 0 rss: 74Mb L: 11/37 MS: 1 ChangeByte- 00:12:37.593 [2024-10-14 17:29:34.609154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.593 [2024-10-14 17:29:34.609183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.593 [2024-10-14 17:29:34.609231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:37.593 [2024-10-14 17:29:34.609247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:37.593 [2024-10-14 17:29:34.609300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:37.593 [2024-10-14 17:29:34.609315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:37.593 
[2024-10-14 17:29:34.609369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:37.593 [2024-10-14 17:29:34.609385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:37.593 #23 NEW cov: 12466 ft: 15117 corp: 10/159b lim: 50 exec/s: 0 rss: 74Mb L: 41/41 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:12:37.593 [2024-10-14 17:29:34.648867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.593 [2024-10-14 17:29:34.648896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.593 #24 NEW cov: 12466 ft: 15184 corp: 11/170b lim: 50 exec/s: 0 rss: 74Mb L: 11/41 MS: 1 ChangeByte- 00:12:37.852 [2024-10-14 17:29:34.689292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.852 [2024-10-14 17:29:34.689321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.852 [2024-10-14 17:29:34.689358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:37.852 [2024-10-14 17:29:34.689373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:37.852 [2024-10-14 17:29:34.689427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:37.852 [2024-10-14 17:29:34.689443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:37.852 #25 NEW cov: 12466 ft: 15229 corp: 12/208b lim: 50 exec/s: 0 rss: 74Mb L: 38/41 MS: 1 InsertByte- 00:12:37.852 [2024-10-14 17:29:34.749137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.852 [2024-10-14 17:29:34.749165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.852 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:37.852 #26 NEW cov: 12489 ft: 15295 corp: 13/226b lim: 50 exec/s: 0 rss: 74Mb L: 18/41 MS: 1 PersAutoDict- DE: "\203\033\210#\223)+\000"- 00:12:37.852 [2024-10-14 17:29:34.789260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.853 [2024-10-14 17:29:34.789289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.853 #27 NEW cov: 12489 ft: 15313 corp: 14/237b lim: 50 exec/s: 0 rss: 74Mb L: 11/41 MS: 1 InsertByte- 00:12:37.853 [2024-10-14 17:29:34.849450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:37.853 [2024-10-14 17:29:34.849477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.853 #33 NEW cov: 12489 ft: 15360 corp: 15/255b lim: 50 exec/s: 33 rss: 74Mb L: 18/41 MS: 1 ShuffleBytes- 00:12:37.853 [2024-10-14 17:29:34.909601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 
nsid:0 00:12:37.853 [2024-10-14 17:29:34.909628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:37.853 #34 NEW cov: 12489 ft: 15381 corp: 16/267b lim: 50 exec/s: 34 rss: 74Mb L: 12/41 MS: 1 InsertByte- 00:12:38.111 [2024-10-14 17:29:34.949970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.111 [2024-10-14 17:29:34.949997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.111 [2024-10-14 17:29:34.950069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.111 [2024-10-14 17:29:34.950086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.111 [2024-10-14 17:29:34.950138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.111 [2024-10-14 17:29:34.950154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.111 #35 NEW cov: 12489 ft: 15385 corp: 17/306b lim: 50 exec/s: 35 rss: 74Mb L: 39/41 MS: 1 InsertRepeatedBytes- 00:12:38.111 [2024-10-14 17:29:34.990113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.111 [2024-10-14 17:29:34.990141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.111 [2024-10-14 17:29:34.990189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.111 [2024-10-14 17:29:34.990205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.111 [2024-10-14 17:29:34.990260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.111 [2024-10-14 17:29:34.990277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.111 #41 NEW cov: 12489 ft: 15426 corp: 18/345b lim: 50 exec/s: 41 rss: 75Mb L: 39/41 MS: 1 CMP- DE: "\000\000\377\377"- 00:12:38.111 [2024-10-14 17:29:35.050267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.111 [2024-10-14 17:29:35.050296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.111 [2024-10-14 17:29:35.050335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.111 [2024-10-14 17:29:35.050350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.112 [2024-10-14 17:29:35.050405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.112 [2024-10-14 17:29:35.050421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.112 #47 NEW cov: 12489 ft: 15441 corp: 19/382b lim: 50 exec/s: 47 rss: 75Mb L: 37/41 MS: 1 CMP- DE: 
"\377\377\377\001"- 00:12:38.112 [2024-10-14 17:29:35.090537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.112 [2024-10-14 17:29:35.090565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.112 [2024-10-14 17:29:35.090611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.112 [2024-10-14 17:29:35.090628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.112 [2024-10-14 17:29:35.090679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.112 [2024-10-14 17:29:35.090694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.112 [2024-10-14 17:29:35.090748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:38.112 [2024-10-14 17:29:35.090764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:38.112 #48 NEW cov: 12489 ft: 15472 corp: 20/424b lim: 50 exec/s: 48 rss: 75Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:12:38.112 [2024-10-14 17:29:35.150284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.112 [2024-10-14 17:29:35.150311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.112 #53 NEW cov: 12489 ft: 15577 corp: 21/442b lim: 50 exec/s: 53 rss: 75Mb L: 18/42 MS: 5 EraseBytes-ShuffleBytes-ChangeByte-ChangeBit-CopyPart- 00:12:38.112 [2024-10-14 17:29:35.190409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.112 [2024-10-14 17:29:35.190437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.371 #54 NEW cov: 12489 ft: 15603 corp: 22/452b lim: 50 exec/s: 54 rss: 75Mb L: 10/42 MS: 1 ShuffleBytes- 00:12:38.371 [2024-10-14 17:29:35.230915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.371 [2024-10-14 17:29:35.230943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.371 [2024-10-14 17:29:35.230991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.371 [2024-10-14 17:29:35.231007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.371 [2024-10-14 17:29:35.231083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.371 [2024-10-14 17:29:35.231111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.371 [2024-10-14 17:29:35.231164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:38.371 [2024-10-14 17:29:35.231180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:38.371 #55 NEW cov: 12489 ft: 15623 corp: 23/494b lim: 50 exec/s: 55 rss: 75Mb L: 42/42 MS: 1 ChangeByte- 00:12:38.371 [2024-10-14 17:29:35.290642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.371 [2024-10-14 17:29:35.290670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.371 #56 NEW cov: 12489 ft: 15638 corp: 24/505b lim: 50 exec/s: 56 rss: 75Mb L: 11/42 MS: 1 ChangeByte- 00:12:38.371 [2024-10-14 17:29:35.350826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.371 [2024-10-14 17:29:35.350853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.371 #57 NEW cov: 12489 ft: 15690 corp: 25/515b lim: 50 exec/s: 57 rss: 75Mb L: 10/42 MS: 1 ChangeBinInt- 00:12:38.371 [2024-10-14 17:29:35.391230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.371 [2024-10-14 17:29:35.391257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.371 [2024-10-14 17:29:35.391296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.371 [2024-10-14 17:29:35.391312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.371 [2024-10-14 17:29:35.391367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.371 [2024-10-14 17:29:35.391382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.371 #58 NEW cov: 12489 ft: 15703 corp: 26/546b lim: 50 exec/s: 58 rss: 75Mb L: 31/42 MS: 1 InsertRepeatedBytes- 00:12:38.371 [2024-10-14 17:29:35.451078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.371 [2024-10-14 17:29:35.451105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.630 #59 NEW cov: 12489 ft: 15807 corp: 27/557b lim: 50 exec/s: 59 rss: 75Mb L: 11/42 MS: 1 PersAutoDict- DE: "\203\033\210#\223)+\000"- 00:12:38.630 [2024-10-14 17:29:35.491211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.630 [2024-10-14 17:29:35.491238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.630 #60 NEW cov: 12489 ft: 15838 corp: 28/568b lim: 50 exec/s: 60 rss: 75Mb L: 11/42 MS: 1 InsertByte- 00:12:38.630 [2024-10-14 17:29:35.531666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.630 [2024-10-14 17:29:35.531694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.630 [2024-10-14 17:29:35.531740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.630 [2024-10-14 17:29:35.531756] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.630 [2024-10-14 17:29:35.531809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.630 [2024-10-14 17:29:35.531825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.630 [2024-10-14 17:29:35.531877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:38.630 [2024-10-14 17:29:35.531892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:38.630 #61 NEW cov: 12489 ft: 15852 corp: 29/610b lim: 50 exec/s: 61 rss: 75Mb L: 42/42 MS: 1 PersAutoDict- DE: "\000\000\377\377"- 00:12:38.630 [2024-10-14 17:29:35.571592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.630 [2024-10-14 17:29:35.571620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.630 [2024-10-14 17:29:35.571660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.630 [2024-10-14 17:29:35.571681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.630 #62 NEW cov: 12489 ft: 16117 corp: 30/631b lim: 50 exec/s: 62 rss: 75Mb L: 21/42 MS: 1 EraseBytes- 00:12:38.630 [2024-10-14 17:29:35.611529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.630 [2024-10-14 17:29:35.611557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.630 #63 NEW cov: 12489 ft: 16132 corp: 31/649b lim: 50 exec/s: 63 rss: 75Mb L: 18/42 MS: 1 CopyPart- 00:12:38.630 [2024-10-14 17:29:35.672040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.630 [2024-10-14 17:29:35.672068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.630 [2024-10-14 17:29:35.672115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.630 [2024-10-14 17:29:35.672131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.630 [2024-10-14 17:29:35.672187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.630 [2024-10-14 17:29:35.672203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.630 #64 NEW cov: 12489 ft: 16145 corp: 32/688b lim: 50 exec/s: 64 rss: 75Mb L: 39/42 MS: 1 ChangeByte- 00:12:38.630 [2024-10-14 17:29:35.711831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.630 [2024-10-14 17:29:35.711859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.889 #65 NEW cov: 12489 ft: 16164 corp: 33/703b lim: 50 exec/s: 
65 rss: 75Mb L: 15/42 MS: 1 CMP- DE: "\000\000\000\016"- 00:12:38.889 [2024-10-14 17:29:35.772473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.889 [2024-10-14 17:29:35.772500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.889 [2024-10-14 17:29:35.772547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.889 [2024-10-14 17:29:35.772563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.889 [2024-10-14 17:29:35.772617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.889 [2024-10-14 17:29:35.772633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.889 [2024-10-14 17:29:35.772685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:38.889 [2024-10-14 17:29:35.772701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:38.889 #66 NEW cov: 12489 ft: 16201 corp: 34/744b lim: 50 exec/s: 66 rss: 75Mb L: 41/42 MS: 1 InsertRepeatedBytes- 00:12:38.889 [2024-10-14 17:29:35.812109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.889 [2024-10-14 17:29:35.812138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.889 #67 NEW cov: 12489 ft: 16239 corp: 35/755b lim: 50 exec/s: 67 rss: 75Mb L: 11/42 MS: 1 ChangeBit- 00:12:38.889 [2024-10-14 17:29:35.852650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:12:38.889 [2024-10-14 17:29:35.852678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:38.889 [2024-10-14 17:29:35.852726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:12:38.889 [2024-10-14 17:29:35.852745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:38.889 [2024-10-14 17:29:35.852796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:12:38.889 [2024-10-14 17:29:35.852812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:38.889 [2024-10-14 17:29:35.852867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:12:38.889 [2024-10-14 17:29:35.852883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:38.889 #68 NEW cov: 12489 ft: 16251 corp: 36/796b lim: 50 exec/s: 34 rss: 75Mb L: 41/42 MS: 1 ChangeBit- 00:12:38.889 #68 DONE cov: 12489 ft: 16251 corp: 36/796b lim: 50 exec/s: 34 rss: 75Mb 00:12:38.889 ###### Recommended dictionary. 
###### 00:12:38.889 "\203\033\210#\223)+\000" # Uses: 5 00:12:38.889 "\000\000\377\377" # Uses: 1 00:12:38.889 "\377\377\377\001" # Uses: 0 00:12:38.889 "\000\000\000\016" # Uses: 0 00:12:38.889 ###### End of recommended dictionary. ###### 00:12:38.889 Done 68 runs in 2 second(s) 00:12:39.148 17:29:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:12:39.148 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:39.148 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:39.148 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:12:39.148 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:12:39.148 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:39.149 17:29:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:12:39.149 [2024-10-14 17:29:36.046005] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:39.149 [2024-10-14 17:29:36.046084] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110960 ] 00:12:39.149 [2024-10-14 17:29:36.239004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.408 [2024-10-14 17:29:36.278024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.408 [2024-10-14 17:29:36.336979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.408 [2024-10-14 17:29:36.353145] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:12:39.408 INFO: Running with entropic power schedule (0xFF, 100). 00:12:39.408 INFO: Seed: 1539511744 00:12:39.408 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:39.408 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:39.408 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:12:39.408 INFO: A corpus is not provided, starting from an empty corpus 00:12:39.408 #2 INITED exec/s: 0 rss: 66Mb 00:12:39.408 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:39.408 This may also happen if the target rejected all inputs we tried so far 00:12:39.408 [2024-10-14 17:29:36.408563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:39.408 [2024-10-14 17:29:36.408593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:39.667 NEW_FUNC[1/714]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:12:39.667 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:39.667 #11 NEW cov: 12254 ft: 12238 corp: 2/27b lim: 85 exec/s: 0 rss: 74Mb L: 26/26 MS: 4 InsertByte-CopyPart-ChangeBit-InsertRepeatedBytes- 00:12:39.667 [2024-10-14 17:29:36.749523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:39.667 [2024-10-14 17:29:36.749583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:39.926 NEW_FUNC[1/2]: 0x1c026a8 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:595 00:12:39.926 NEW_FUNC[2/2]: 0x1c08388 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:962 00:12:39.926 #12 NEW cov: 12401 ft: 13099 corp: 3/53b lim: 85 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 ChangeByte- 00:12:39.926 [2024-10-14 17:29:36.819468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:39.926 [2024-10-14 17:29:36.819498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:39.926 #13 NEW cov: 12407 ft: 13391 corp: 4/79b lim: 85 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 CrossOver- 00:12:39.926 [2024-10-14 17:29:36.859721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:12:39.926 [2024-10-14 17:29:36.859749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:39.926 [2024-10-14 17:29:36.859787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:39.926 [2024-10-14 17:29:36.859804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:39.926 #14 NEW cov: 12492 ft: 14460 corp: 5/126b lim: 85 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:12:39.926 [2024-10-14 17:29:36.920038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:39.926 [2024-10-14 17:29:36.920065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:39.926 [2024-10-14 17:29:36.920112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:39.927 [2024-10-14 17:29:36.920128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:39.927 [2024-10-14 17:29:36.920183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:39.927 [2024-10-14 17:29:36.920201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:39.927 #15 NEW cov: 12492 ft: 14891 corp: 6/193b lim: 85 exec/s: 0 rss: 74Mb L: 67/67 MS: 1 CopyPart- 00:12:39.927 [2024-10-14 17:29:36.979891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:39.927 [2024-10-14 17:29:36.979919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:39.927 #16 NEW cov: 12492 ft: 15027 corp: 7/220b lim: 85 exec/s: 0 rss: 74Mb L: 27/67 MS: 1 InsertByte- 00:12:40.186 [2024-10-14 17:29:37.020369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.186 [2024-10-14 17:29:37.020397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.186 [2024-10-14 17:29:37.020445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.186 [2024-10-14 17:29:37.020462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.186 [2024-10-14 17:29:37.020516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.186 [2024-10-14 17:29:37.020530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.186 #17 NEW cov: 12492 ft: 15118 corp: 8/287b lim: 85 exec/s: 0 rss: 74Mb L: 67/67 MS: 1 CopyPart- 00:12:40.186 [2024-10-14 17:29:37.080338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.186 [2024-10-14 17:29:37.080365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.186 [2024-10-14 
17:29:37.080414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.186 [2024-10-14 17:29:37.080430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.186 #18 NEW cov: 12492 ft: 15137 corp: 9/332b lim: 85 exec/s: 0 rss: 74Mb L: 45/67 MS: 1 CrossOver- 00:12:40.186 [2024-10-14 17:29:37.140518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.186 [2024-10-14 17:29:37.140547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.186 [2024-10-14 17:29:37.140601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.186 [2024-10-14 17:29:37.140618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.186 #19 NEW cov: 12492 ft: 15155 corp: 10/377b lim: 85 exec/s: 0 rss: 74Mb L: 45/67 MS: 1 ShuffleBytes- 00:12:40.186 [2024-10-14 17:29:37.200505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.186 [2024-10-14 17:29:37.200534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.186 #20 NEW cov: 12492 ft: 15192 corp: 11/403b lim: 85 exec/s: 0 rss: 74Mb L: 26/67 MS: 1 ChangeBit- 00:12:40.186 [2024-10-14 17:29:37.240933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.186 [2024-10-14 17:29:37.240960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.186 [2024-10-14 17:29:37.241004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.186 [2024-10-14 17:29:37.241020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.186 [2024-10-14 17:29:37.241096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.186 [2024-10-14 17:29:37.241116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.445 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:40.445 #21 NEW cov: 12515 ft: 15214 corp: 12/454b lim: 85 exec/s: 0 rss: 74Mb L: 51/67 MS: 1 CrossOver- 00:12:40.445 [2024-10-14 17:29:37.300951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.445 [2024-10-14 17:29:37.300979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.445 [2024-10-14 17:29:37.301018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.445 [2024-10-14 17:29:37.301041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.445 #22 NEW cov: 12515 ft: 15242 corp: 13/499b lim: 85 exec/s: 0 rss: 74Mb L: 45/67 MS: 1 ChangeByte- 
00:12:40.445 [2024-10-14 17:29:37.341364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.445 [2024-10-14 17:29:37.341393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.445 [2024-10-14 17:29:37.341440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.445 [2024-10-14 17:29:37.341456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.445 [2024-10-14 17:29:37.341513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.445 [2024-10-14 17:29:37.341528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.445 [2024-10-14 17:29:37.341581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:40.445 [2024-10-14 17:29:37.341597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:40.445 #23 NEW cov: 12515 ft: 15666 corp: 14/579b lim: 85 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:12:40.445 [2024-10-14 17:29:37.401216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.445 [2024-10-14 17:29:37.401243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.445 [2024-10-14 17:29:37.401281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.445 [2024-10-14 17:29:37.401297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.445 #24 NEW cov: 12515 ft: 15684 corp: 15/624b lim: 85 exec/s: 24 rss: 75Mb L: 45/80 MS: 1 CopyPart- 00:12:40.445 [2024-10-14 17:29:37.461701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.445 [2024-10-14 17:29:37.461731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.446 [2024-10-14 17:29:37.461771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.446 [2024-10-14 17:29:37.461788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.446 [2024-10-14 17:29:37.461843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.446 [2024-10-14 17:29:37.461859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.446 [2024-10-14 17:29:37.461913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:40.446 [2024-10-14 17:29:37.461933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:40.446 #25 NEW cov: 12515 ft: 15710 corp: 16/695b lim: 85 exec/s: 25 rss: 75Mb L: 71/80 MS: 1 
CrossOver- 00:12:40.446 [2024-10-14 17:29:37.501763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.446 [2024-10-14 17:29:37.501791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.446 [2024-10-14 17:29:37.501841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.446 [2024-10-14 17:29:37.501857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.446 [2024-10-14 17:29:37.501909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.446 [2024-10-14 17:29:37.501924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.446 [2024-10-14 17:29:37.501980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:40.446 [2024-10-14 17:29:37.501995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:40.705 #26 NEW cov: 12515 ft: 15770 corp: 17/779b lim: 85 exec/s: 26 rss: 75Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:12:40.705 [2024-10-14 17:29:37.561660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.705 [2024-10-14 17:29:37.561687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.561740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.705 [2024-10-14 17:29:37.561755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.705 #27 NEW cov: 12515 ft: 15788 corp: 18/824b lim: 85 exec/s: 27 rss: 75Mb L: 45/84 MS: 1 ChangeBinInt- 00:12:40.705 [2024-10-14 17:29:37.621964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.705 [2024-10-14 17:29:37.621992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.622046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.705 [2024-10-14 17:29:37.622079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.622136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.705 [2024-10-14 17:29:37.622152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.705 #28 NEW cov: 12515 ft: 15809 corp: 19/876b lim: 85 exec/s: 28 rss: 75Mb L: 52/84 MS: 1 InsertByte- 00:12:40.705 [2024-10-14 17:29:37.681993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.705 [2024-10-14 17:29:37.682020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.682074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.705 [2024-10-14 17:29:37.682090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.705 #29 NEW cov: 12515 ft: 15821 corp: 20/921b lim: 85 exec/s: 29 rss: 75Mb L: 45/84 MS: 1 ShuffleBytes- 00:12:40.705 [2024-10-14 17:29:37.722392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.705 [2024-10-14 17:29:37.722424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.722461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.705 [2024-10-14 17:29:37.722478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.722532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.705 [2024-10-14 17:29:37.722547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.722602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:40.705 [2024-10-14 17:29:37.722616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:40.705 #30 NEW cov: 12515 ft: 15852 corp: 21/989b lim: 85 exec/s: 30 rss: 75Mb L: 68/84 MS: 1 InsertByte- 00:12:40.705 [2024-10-14 17:29:37.762483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.705 [2024-10-14 17:29:37.762511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.762559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.705 [2024-10-14 17:29:37.762575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.762629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:40.705 [2024-10-14 17:29:37.762646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:40.705 [2024-10-14 17:29:37.762700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:40.705 [2024-10-14 17:29:37.762715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:40.705 #34 NEW cov: 12515 ft: 15858 corp: 22/1061b lim: 85 exec/s: 34 rss: 75Mb L: 72/84 MS: 4 InsertByte-ChangeByte-ChangeByte-CrossOver- 00:12:40.964 [2024-10-14 17:29:37.802310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.965 [2024-10-14 17:29:37.802336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.965 [2024-10-14 17:29:37.802389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.965 [2024-10-14 17:29:37.802405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.965 #35 NEW cov: 12515 ft: 15934 corp: 23/1107b lim: 85 exec/s: 35 rss: 75Mb L: 46/84 MS: 1 InsertByte- 00:12:40.965 [2024-10-14 17:29:37.842441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.965 [2024-10-14 17:29:37.842468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.965 [2024-10-14 17:29:37.842518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.965 [2024-10-14 17:29:37.842534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.965 #36 NEW cov: 12515 ft: 15970 corp: 24/1152b lim: 85 exec/s: 36 rss: 75Mb L: 45/84 MS: 1 ChangeByte- 00:12:40.965 [2024-10-14 17:29:37.882476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.965 [2024-10-14 17:29:37.882503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.965 [2024-10-14 17:29:37.882568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.965 [2024-10-14 17:29:37.882584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:40.965 #37 NEW cov: 12515 ft: 15987 corp: 25/1193b lim: 85 exec/s: 37 rss: 75Mb L: 41/84 MS: 1 EraseBytes- 00:12:40.965 [2024-10-14 17:29:37.942566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.965 [2024-10-14 17:29:37.942594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.965 #43 NEW cov: 12515 ft: 15989 corp: 26/1223b lim: 85 exec/s: 43 rss: 75Mb L: 30/84 MS: 1 InsertRepeatedBytes- 00:12:40.965 [2024-10-14 17:29:38.002714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.965 [2024-10-14 17:29:38.002741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.965 #44 NEW cov: 12515 ft: 16014 corp: 27/1256b lim: 85 exec/s: 44 rss: 75Mb L: 33/84 MS: 1 CopyPart- 00:12:40.965 [2024-10-14 17:29:38.042964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:40.965 [2024-10-14 17:29:38.042991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:40.965 [2024-10-14 17:29:38.043032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:40.965 [2024-10-14 17:29:38.043064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.224 #45 NEW 
cov: 12515 ft: 16023 corp: 28/1303b lim: 85 exec/s: 45 rss: 75Mb L: 47/84 MS: 1 CMP- DE: "\004\000\000\000"- 00:12:41.224 [2024-10-14 17:29:38.082894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.224 [2024-10-14 17:29:38.082921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.224 #46 NEW cov: 12515 ft: 16044 corp: 29/1334b lim: 85 exec/s: 46 rss: 75Mb L: 31/84 MS: 1 InsertByte- 00:12:41.224 [2024-10-14 17:29:38.143437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.224 [2024-10-14 17:29:38.143465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.224 [2024-10-14 17:29:38.143513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:41.224 [2024-10-14 17:29:38.143529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.224 [2024-10-14 17:29:38.143584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:41.224 [2024-10-14 17:29:38.143600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:41.224 #47 NEW cov: 12515 ft: 16055 corp: 30/1387b lim: 85 exec/s: 47 rss: 75Mb L: 53/84 MS: 1 CMP- DE: "\377*)\224\356g\270t"- 00:12:41.224 [2024-10-14 17:29:38.183320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.224 [2024-10-14 17:29:38.183348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.224 [2024-10-14 17:29:38.183393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:41.224 [2024-10-14 17:29:38.183410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.224 #48 NEW cov: 12515 ft: 16065 corp: 31/1422b lim: 85 exec/s: 48 rss: 75Mb L: 35/84 MS: 1 EraseBytes- 00:12:41.224 [2024-10-14 17:29:38.243546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.224 [2024-10-14 17:29:38.243573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.224 [2024-10-14 17:29:38.243628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:41.224 [2024-10-14 17:29:38.243645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.224 #49 NEW cov: 12515 ft: 16074 corp: 32/1472b lim: 85 exec/s: 49 rss: 75Mb L: 50/84 MS: 1 PersAutoDict- DE: "\004\000\000\000"- 00:12:41.224 [2024-10-14 17:29:38.303659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.224 [2024-10-14 17:29:38.303687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.224 [2024-10-14 17:29:38.303726] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:41.224 [2024-10-14 17:29:38.303743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.484 #50 NEW cov: 12515 ft: 16077 corp: 33/1517b lim: 85 exec/s: 50 rss: 76Mb L: 45/84 MS: 1 ChangeByte- 00:12:41.484 [2024-10-14 17:29:38.344111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.484 [2024-10-14 17:29:38.344139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.484 [2024-10-14 17:29:38.344187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:41.484 [2024-10-14 17:29:38.344203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.484 [2024-10-14 17:29:38.344258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:41.484 [2024-10-14 17:29:38.344275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:41.484 [2024-10-14 17:29:38.344327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:41.484 [2024-10-14 17:29:38.344344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:41.484 #51 NEW cov: 12515 ft: 16079 corp: 34/1588b lim: 85 exec/s: 51 rss: 76Mb L: 71/84 MS: 1 ShuffleBytes- 00:12:41.484 [2024-10-14 17:29:38.384197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:12:41.484 [2024-10-14 17:29:38.384224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:41.484 [2024-10-14 17:29:38.384271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:12:41.484 [2024-10-14 17:29:38.384286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:41.484 [2024-10-14 17:29:38.384341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:12:41.484 [2024-10-14 17:29:38.384356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:41.484 [2024-10-14 17:29:38.384411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:12:41.484 [2024-10-14 17:29:38.384426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:41.484 #52 NEW cov: 12515 ft: 16081 corp: 35/1666b lim: 85 exec/s: 26 rss: 76Mb L: 78/84 MS: 1 CopyPart- 00:12:41.484 #52 DONE cov: 12515 ft: 16081 corp: 35/1666b lim: 85 exec/s: 26 rss: 76Mb 00:12:41.484 ###### Recommended dictionary. ###### 00:12:41.484 "\004\000\000\000" # Uses: 1 00:12:41.484 "\377*)\224\356g\270t" # Uses: 0 00:12:41.484 ###### End of recommended dictionary. 
###### 00:12:41.484 Done 52 runs in 2 second(s) 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:41.484 17:29:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:12:41.484 [2024-10-14 17:29:38.560694] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:41.484 [2024-10-14 17:29:38.560774] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111229 ] 00:12:41.743 [2024-10-14 17:29:38.757318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.743 [2024-10-14 17:29:38.796322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.003 [2024-10-14 17:29:38.855378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.003 [2024-10-14 17:29:38.871528] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:12:42.003 INFO: Running with entropic power schedule (0xFF, 100). 00:12:42.003 INFO: Seed: 4055499562 00:12:42.003 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:42.003 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:42.003 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:12:42.003 INFO: A corpus is not provided, starting from an empty corpus 00:12:42.003 #2 INITED exec/s: 0 rss: 66Mb 00:12:42.003 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:42.003 This may also happen if the target rejected all inputs we tried so far 00:12:42.003 [2024-10-14 17:29:38.930727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.003 [2024-10-14 17:29:38.930757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.003 [2024-10-14 17:29:38.930810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.003 [2024-10-14 17:29:38.930827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.003 [2024-10-14 17:29:38.930880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.003 [2024-10-14 17:29:38.930896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.003 [2024-10-14 17:29:38.930949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:42.003 [2024-10-14 17:29:38.930964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:42.262 NEW_FUNC[1/715]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:12:42.262 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:42.262 #7 NEW cov: 12221 ft: 12190 corp: 2/24b lim: 25 exec/s: 0 rss: 74Mb L: 23/23 MS: 5 ChangeByte-ChangeBit-ChangeByte-CopyPart-InsertRepeatedBytes- 00:12:42.262 [2024-10-14 17:29:39.271480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.262 [2024-10-14 17:29:39.271539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:12:42.262 [2024-10-14 17:29:39.271621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.262 [2024-10-14 17:29:39.271651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.262 #12 NEW cov: 12334 ft: 13246 corp: 3/34b lim: 25 exec/s: 0 rss: 74Mb L: 10/23 MS: 5 ShuffleBytes-InsertByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:12:42.262 [2024-10-14 17:29:39.321566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.262 [2024-10-14 17:29:39.321595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.262 [2024-10-14 17:29:39.321638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.262 [2024-10-14 17:29:39.321654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.262 [2024-10-14 17:29:39.321709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.262 [2024-10-14 17:29:39.321726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.522 #13 NEW cov: 12340 ft: 13692 corp: 4/52b lim: 25 exec/s: 0 rss: 74Mb L: 18/23 MS: 1 EraseBytes- 00:12:42.522 [2024-10-14 17:29:39.381882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.522 [2024-10-14 17:29:39.381911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.381962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.522 [2024-10-14 17:29:39.381978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.382036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.522 [2024-10-14 17:29:39.382053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.382110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:42.522 [2024-10-14 17:29:39.382131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:42.522 #18 NEW cov: 12425 ft: 13917 corp: 5/76b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 5 ShuffleBytes-InsertByte-ChangeBit-InsertByte-InsertRepeatedBytes- 00:12:42.522 [2024-10-14 17:29:39.421936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.522 [2024-10-14 17:29:39.421963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.422018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.522 [2024-10-14 17:29:39.422037] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.422093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.522 [2024-10-14 17:29:39.422109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.422165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:42.522 [2024-10-14 17:29:39.422181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:42.522 #19 NEW cov: 12425 ft: 14141 corp: 6/100b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CMP- DE: "\003\000\000\000\000\000\000\000"- 00:12:42.522 [2024-10-14 17:29:39.481785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.522 [2024-10-14 17:29:39.481813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.522 #21 NEW cov: 12425 ft: 14583 corp: 7/107b lim: 25 exec/s: 0 rss: 74Mb L: 7/24 MS: 2 CrossOver-CrossOver- 00:12:42.522 [2024-10-14 17:29:39.542365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.522 [2024-10-14 17:29:39.542393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.542441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.522 [2024-10-14 17:29:39.542458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.542513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.522 [2024-10-14 17:29:39.542528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.522 [2024-10-14 17:29:39.542582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:42.522 [2024-10-14 17:29:39.542598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:42.522 #22 NEW cov: 12425 ft: 14665 corp: 8/127b lim: 25 exec/s: 0 rss: 74Mb L: 20/24 MS: 1 CrossOver- 00:12:42.522 [2024-10-14 17:29:39.602139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.522 [2024-10-14 17:29:39.602168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.782 #23 NEW cov: 12425 ft: 14825 corp: 9/134b lim: 25 exec/s: 0 rss: 74Mb L: 7/24 MS: 1 ShuffleBytes- 00:12:42.782 [2024-10-14 17:29:39.662515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.782 [2024-10-14 17:29:39.662544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.662582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.782 [2024-10-14 17:29:39.662602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.662659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.782 [2024-10-14 17:29:39.662675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.782 #24 NEW cov: 12425 ft: 14912 corp: 10/149b lim: 25 exec/s: 0 rss: 74Mb L: 15/24 MS: 1 PersAutoDict- DE: "\003\000\000\000\000\000\000\000"- 00:12:42.782 [2024-10-14 17:29:39.722434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.782 [2024-10-14 17:29:39.722461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.782 #25 NEW cov: 12425 ft: 14975 corp: 11/155b lim: 25 exec/s: 0 rss: 74Mb L: 6/24 MS: 1 EraseBytes- 00:12:42.782 [2024-10-14 17:29:39.762677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.782 [2024-10-14 17:29:39.762704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.762744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.782 [2024-10-14 17:29:39.762759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.782 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:42.782 #26 NEW cov: 12448 ft: 15054 corp: 12/166b lim: 25 exec/s: 0 rss: 74Mb L: 11/24 MS: 1 InsertByte- 00:12:42.782 [2024-10-14 17:29:39.822945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.782 [2024-10-14 17:29:39.822972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.823013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:42.782 [2024-10-14 17:29:39.823033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.823090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.782 [2024-10-14 17:29:39.823106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.782 #32 NEW cov: 12448 ft: 15089 corp: 13/184b lim: 25 exec/s: 0 rss: 74Mb L: 18/24 MS: 1 PersAutoDict- DE: "\003\000\000\000\000\000\000\000"- 00:12:42.782 [2024-10-14 17:29:39.863196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:42.782 [2024-10-14 17:29:39.863224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.863272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:1 nsid:0 00:12:42.782 [2024-10-14 17:29:39.863289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.863345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:42.782 [2024-10-14 17:29:39.863361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:42.782 [2024-10-14 17:29:39.863418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:42.782 [2024-10-14 17:29:39.863434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.041 #33 NEW cov: 12448 ft: 15126 corp: 14/208b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ChangeBinInt- 00:12:43.041 [2024-10-14 17:29:39.903454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.041 [2024-10-14 17:29:39.903482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:39.903537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.041 [2024-10-14 17:29:39.903554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:39.903609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.041 [2024-10-14 17:29:39.903626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:39.903684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.041 [2024-10-14 17:29:39.903700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:39.903757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:12:43.041 [2024-10-14 17:29:39.903774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:43.041 #34 NEW cov: 12448 ft: 15205 corp: 15/233b lim: 25 exec/s: 34 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:12:43.041 [2024-10-14 17:29:39.943057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.041 [2024-10-14 17:29:39.943085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.041 #35 NEW cov: 12448 ft: 15253 corp: 16/240b lim: 25 exec/s: 35 rss: 74Mb L: 7/25 MS: 1 ChangeBinInt- 00:12:43.041 [2024-10-14 17:29:39.983142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.041 [2024-10-14 17:29:39.983169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.041 #36 NEW cov: 12448 ft: 15289 corp: 17/248b lim: 25 exec/s: 36 rss: 74Mb L: 8/25 MS: 1 InsertByte- 00:12:43.041 
[2024-10-14 17:29:40.023653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.041 [2024-10-14 17:29:40.023684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:40.023733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.041 [2024-10-14 17:29:40.023750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:40.023805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.041 [2024-10-14 17:29:40.023822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:40.023879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.041 [2024-10-14 17:29:40.023897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.041 #37 NEW cov: 12448 ft: 15325 corp: 18/272b lim: 25 exec/s: 37 rss: 74Mb L: 24/25 MS: 1 ChangeByte- 00:12:43.041 [2024-10-14 17:29:40.083891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.041 [2024-10-14 17:29:40.083924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:40.083964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.041 [2024-10-14 17:29:40.083984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:40.084039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.041 [2024-10-14 17:29:40.084056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.041 [2024-10-14 17:29:40.084113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.041 [2024-10-14 17:29:40.084129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.041 #38 NEW cov: 12448 ft: 15346 corp: 19/295b lim: 25 exec/s: 38 rss: 75Mb L: 23/25 MS: 1 ShuffleBytes- 00:12:43.042 [2024-10-14 17:29:40.124054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.042 [2024-10-14 17:29:40.124087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.042 [2024-10-14 17:29:40.124136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.042 [2024-10-14 17:29:40.124153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.042 [2024-10-14 17:29:40.124211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 
nsid:0 00:12:43.042 [2024-10-14 17:29:40.124227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.042 [2024-10-14 17:29:40.124285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.042 [2024-10-14 17:29:40.124301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.042 [2024-10-14 17:29:40.124361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:12:43.042 [2024-10-14 17:29:40.124377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:43.301 #39 NEW cov: 12448 ft: 15412 corp: 20/320b lim: 25 exec/s: 39 rss: 75Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:12:43.301 [2024-10-14 17:29:40.164043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.301 [2024-10-14 17:29:40.164072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.164128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.301 [2024-10-14 17:29:40.164145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.164202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.301 [2024-10-14 17:29:40.164219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.164274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.301 [2024-10-14 17:29:40.164290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.301 #40 NEW cov: 12448 ft: 15439 corp: 21/341b lim: 25 exec/s: 40 rss: 75Mb L: 21/25 MS: 1 EraseBytes- 00:12:43.301 [2024-10-14 17:29:40.224100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.301 [2024-10-14 17:29:40.224129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.224173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.301 [2024-10-14 17:29:40.224189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.224247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.301 [2024-10-14 17:29:40.224263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.301 #41 NEW cov: 12448 ft: 15489 corp: 22/357b lim: 25 exec/s: 41 rss: 75Mb L: 16/25 MS: 1 CrossOver- 00:12:43.301 [2024-10-14 17:29:40.264506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:12:43.301 [2024-10-14 17:29:40.264535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.264585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.301 [2024-10-14 17:29:40.264601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.264656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.301 [2024-10-14 17:29:40.264673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.264727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.301 [2024-10-14 17:29:40.264742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.264799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:12:43.301 [2024-10-14 17:29:40.264813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:43.301 #42 NEW cov: 12448 ft: 15495 corp: 23/382b lim: 25 exec/s: 42 rss: 75Mb L: 25/25 MS: 1 ChangeBit- 00:12:43.301 [2024-10-14 17:29:40.324518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.301 [2024-10-14 17:29:40.324546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.324601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.301 [2024-10-14 17:29:40.324618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.324679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.301 [2024-10-14 17:29:40.324696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.324754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.301 [2024-10-14 17:29:40.324770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.301 #43 NEW cov: 12448 ft: 15519 corp: 24/406b lim: 25 exec/s: 43 rss: 75Mb L: 24/25 MS: 1 ShuffleBytes- 00:12:43.301 [2024-10-14 17:29:40.364513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.301 [2024-10-14 17:29:40.364541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.364588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.301 [2024-10-14 17:29:40.364605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.301 [2024-10-14 17:29:40.364670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.301 [2024-10-14 17:29:40.364686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.561 #46 NEW cov: 12448 ft: 15557 corp: 25/421b lim: 25 exec/s: 46 rss: 75Mb L: 15/25 MS: 3 CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:12:43.561 [2024-10-14 17:29:40.424764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.561 [2024-10-14 17:29:40.424792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.424847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.561 [2024-10-14 17:29:40.424863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.424919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.561 [2024-10-14 17:29:40.424935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.424993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.561 [2024-10-14 17:29:40.425009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.561 #47 NEW cov: 12448 ft: 15564 corp: 26/445b lim: 25 exec/s: 47 rss: 75Mb L: 24/25 MS: 1 ChangeByte- 00:12:43.561 [2024-10-14 17:29:40.464864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.561 [2024-10-14 17:29:40.464892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.464948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.561 [2024-10-14 17:29:40.464965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.465022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.561 [2024-10-14 17:29:40.465042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.465101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.561 [2024-10-14 17:29:40.465116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.561 #48 NEW cov: 12448 ft: 15571 corp: 27/466b lim: 25 exec/s: 48 rss: 75Mb L: 21/25 MS: 1 PersAutoDict- DE: "\003\000\000\000\000\000\000\000"- 00:12:43.561 [2024-10-14 17:29:40.525285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.561 [2024-10-14 17:29:40.525313] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.525370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.561 [2024-10-14 17:29:40.525387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.525444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.561 [2024-10-14 17:29:40.525460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.525518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.561 [2024-10-14 17:29:40.525535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.525598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:12:43.561 [2024-10-14 17:29:40.525615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:43.561 #49 NEW cov: 12448 ft: 15588 corp: 28/491b lim: 25 exec/s: 49 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:12:43.561 [2024-10-14 17:29:40.565023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.561 [2024-10-14 17:29:40.565054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.565104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.561 [2024-10-14 17:29:40.565121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.565181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.561 [2024-10-14 17:29:40.565197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.561 #50 NEW cov: 12448 ft: 15626 corp: 29/507b lim: 25 exec/s: 50 rss: 75Mb L: 16/25 MS: 1 ShuffleBytes- 00:12:43.561 [2024-10-14 17:29:40.625511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.561 [2024-10-14 17:29:40.625539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.625599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.561 [2024-10-14 17:29:40.625615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.625675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.561 [2024-10-14 17:29:40.625692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:12:43.561 [2024-10-14 17:29:40.625748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.561 [2024-10-14 17:29:40.625765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.561 [2024-10-14 17:29:40.625820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:12:43.561 [2024-10-14 17:29:40.625836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:12:43.822 #51 NEW cov: 12448 ft: 15658 corp: 30/532b lim: 25 exec/s: 51 rss: 75Mb L: 25/25 MS: 1 InsertByte- 00:12:43.822 [2024-10-14 17:29:40.685552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.822 [2024-10-14 17:29:40.685580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.685633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.822 [2024-10-14 17:29:40.685650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.685708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.822 [2024-10-14 17:29:40.685725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.685784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.822 [2024-10-14 17:29:40.685803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.822 #52 NEW cov: 12448 ft: 15666 corp: 31/555b lim: 25 exec/s: 52 rss: 75Mb L: 23/25 MS: 1 ShuffleBytes- 00:12:43.822 [2024-10-14 17:29:40.725640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.822 [2024-10-14 17:29:40.725668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.725717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.822 [2024-10-14 17:29:40.725733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.725790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.822 [2024-10-14 17:29:40.725806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.725864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.822 [2024-10-14 17:29:40.725880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.822 #53 NEW cov: 12448 ft: 15711 corp: 32/579b lim: 25 exec/s: 53 rss: 76Mb L: 24/25 MS: 1 CopyPart- 00:12:43.822 
[2024-10-14 17:29:40.785721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.822 [2024-10-14 17:29:40.785750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.785791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.822 [2024-10-14 17:29:40.785807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.785865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.822 [2024-10-14 17:29:40.785883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.822 #54 NEW cov: 12448 ft: 15773 corp: 33/598b lim: 25 exec/s: 54 rss: 76Mb L: 19/25 MS: 1 InsertRepeatedBytes- 00:12:43.822 [2024-10-14 17:29:40.845975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.822 [2024-10-14 17:29:40.846003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.846058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.822 [2024-10-14 17:29:40.846075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.846130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.822 [2024-10-14 17:29:40.846146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.846202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:12:43.822 [2024-10-14 17:29:40.846218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:43.822 #55 NEW cov: 12448 ft: 15781 corp: 34/622b lim: 25 exec/s: 55 rss: 76Mb L: 24/25 MS: 1 InsertByte- 00:12:43.822 [2024-10-14 17:29:40.906018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:12:43.822 [2024-10-14 17:29:40.906050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.906091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:12:43.822 [2024-10-14 17:29:40.906108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:43.822 [2024-10-14 17:29:40.906165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:12:43.822 [2024-10-14 17:29:40.906180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:44.082 #56 NEW cov: 12448 ft: 15784 corp: 35/640b lim: 25 exec/s: 28 rss: 76Mb L: 18/25 MS: 1 InsertRepeatedBytes- 00:12:44.082 
#56 DONE cov: 12448 ft: 15784 corp: 35/640b lim: 25 exec/s: 28 rss: 76Mb 00:12:44.082 ###### Recommended dictionary. ###### 00:12:44.082 "\003\000\000\000\000\000\000\000" # Uses: 3 00:12:44.082 ###### End of recommended dictionary. ###### 00:12:44.082 Done 56 runs in 2 second(s) 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:12:44.082 17:29:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:12:44.082 [2024-10-14 17:29:41.106685] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
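The xtrace above shows how nvmf/run.sh parameterizes fuzzer 24 before handing it to llvm_nvme_fuzz: a per-fuzzer TCP port (4424), a sed-patched copy of fuzz_json.conf, a per-fuzzer corpus directory, and LeakSanitizer suppressions. The sketch below only approximates that launch sequence. The variable names (fuzzer_type, timen, core, corpus_dir, nvmf_cfg, suppress_file) and the flags passed to llvm_nvme_fuzz are taken from the trace, but $rootdir, the exact port arithmetic, and the omission of the -P output-directory flag are assumptions, not the script's actual code.

# Sketch only: approximates the run.sh steps logged above, not the real script.
fuzzer_type=24          # fuzzer index chosen by the ../common.sh loop
timen=1                 # -t: one-second run for the "short" pass
core=0x1                # -m: single-core mask
port="44$(printf '%02d' "$fuzzer_type")"                    # assumption: yields the 4424 seen in the log
corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"    # $rootdir assumed to be the spdk checkout
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
suppress_file=/var/tmp/suppress_nvmf_fuzz
mkdir -p "$corpus_dir"
# Point the target at the per-fuzzer listener instead of the default trsvcid 4420.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# Allocations that intentionally outlive the short run are suppressed for LeakSanitizer.
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
    -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"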
00:12:44.082 [2024-10-14 17:29:41.106756] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111570 ] 00:12:44.341 [2024-10-14 17:29:41.303836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.341 [2024-10-14 17:29:41.342737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.341 [2024-10-14 17:29:41.401888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.341 [2024-10-14 17:29:41.418054] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:12:44.341 INFO: Running with entropic power schedule (0xFF, 100). 00:12:44.341 INFO: Seed: 2308539805 00:12:44.601 INFO: Loaded 1 modules (385370 inline 8-bit counters): 385370 [0x2c03dcc, 0x2c61f26), 00:12:44.601 INFO: Loaded 1 PC tables (385370 PCs): 385370 [0x2c61f28,0x32434c8), 00:12:44.601 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:12:44.601 INFO: A corpus is not provided, starting from an empty corpus 00:12:44.601 #2 INITED exec/s: 0 rss: 66Mb 00:12:44.601 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:44.601 This may also happen if the target rejected all inputs we tried so far 00:12:44.601 [2024-10-14 17:29:41.485342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321146 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.601 [2024-10-14 17:29:41.485385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:44.601 [2024-10-14 17:29:41.485493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.601 [2024-10-14 17:29:41.485513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:44.860 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:12:44.860 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:12:44.860 #6 NEW cov: 12293 ft: 12294 corp: 2/58b lim: 100 exec/s: 0 rss: 74Mb L: 57/57 MS: 4 ChangeBit-ChangeBinInt-InsertRepeatedBytes-InsertRepeatedBytes- 00:12:44.860 [2024-10-14 17:29:41.826784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.826837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:44.860 [2024-10-14 17:29:41.826924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.826947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:44.860 [2024-10-14 17:29:41.827042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.827064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:44.860 [2024-10-14 17:29:41.827175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.827201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:44.860 #8 NEW cov: 12406 ft: 13333 corp: 3/152b lim: 100 exec/s: 0 rss: 74Mb L: 94/94 MS: 2 ChangeBit-InsertRepeatedBytes- 00:12:44.860 [2024-10-14 17:29:41.886983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.887017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:44.860 [2024-10-14 17:29:41.887090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.887114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:44.860 [2024-10-14 17:29:41.887162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.887182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:44.860 [2024-10-14 17:29:41.887273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:44.860 [2024-10-14 17:29:41.887291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:44.860 #9 NEW cov: 12412 ft: 13559 corp: 4/246b lim: 100 exec/s: 0 rss: 74Mb L: 94/94 MS: 1 ChangeBinInt- 00:12:45.119 [2024-10-14 17:29:41.956889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:41.956919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.119 [2024-10-14 17:29:41.956995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:41.957014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.119 [2024-10-14 17:29:41.957075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:41.957094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.119 #12 NEW cov: 
12497 ft: 14083 corp: 5/307b lim: 100 exec/s: 0 rss: 74Mb L: 61/94 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:12:45.119 [2024-10-14 17:29:42.007438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:42.007467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.119 [2024-10-14 17:29:42.007538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9910603898718829705 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:42.007556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.119 [2024-10-14 17:29:42.007624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:42.007640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.119 [2024-10-14 17:29:42.007725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:42.007745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.119 #13 NEW cov: 12497 ft: 14158 corp: 6/405b lim: 100 exec/s: 0 rss: 74Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:12:45.119 [2024-10-14 17:29:42.077017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321146 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:42.077049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.119 [2024-10-14 17:29:42.077136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.119 [2024-10-14 17:29:42.077153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.119 #14 NEW cov: 12497 ft: 14225 corp: 7/462b lim: 100 exec/s: 0 rss: 74Mb L: 57/98 MS: 1 ShuffleBytes- 00:12:45.120 [2024-10-14 17:29:42.147987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.148015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.120 [2024-10-14 17:29:42.148106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.148124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.120 [2024-10-14 17:29:42.148190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:12:45.120 [2024-10-14 17:29:42.148208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.120 [2024-10-14 17:29:42.148297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.148315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.120 #15 NEW cov: 12497 ft: 14270 corp: 8/556b lim: 100 exec/s: 0 rss: 74Mb L: 94/98 MS: 1 ChangeBit- 00:12:45.120 [2024-10-14 17:29:42.198165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.198196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.120 [2024-10-14 17:29:42.198261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.198281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.120 [2024-10-14 17:29:42.198351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.198367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.120 [2024-10-14 17:29:42.198460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.120 [2024-10-14 17:29:42.198478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.380 #16 NEW cov: 12497 ft: 14295 corp: 9/647b lim: 100 exec/s: 0 rss: 74Mb L: 91/98 MS: 1 EraseBytes- 00:12:45.380 [2024-10-14 17:29:42.247720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321146 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.247749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.247830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446670118667681791 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.247845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.380 #17 NEW cov: 12497 ft: 14367 corp: 10/697b lim: 100 exec/s: 0 rss: 74Mb L: 50/98 MS: 1 CrossOver- 00:12:45.380 [2024-10-14 17:29:42.318697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952491108940988 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.318728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 
17:29:42.318788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.318804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.318860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.318877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.318961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.318979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.380 #22 NEW cov: 12497 ft: 14424 corp: 11/790b lim: 100 exec/s: 0 rss: 74Mb L: 93/98 MS: 5 ChangeByte-ChangeByte-CopyPart-CopyPart-CrossOver- 00:12:45.380 [2024-10-14 17:29:42.369079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.369109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.369182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.369204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.369275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:3166485692 len:95 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.369291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.369378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.369394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.380 NEW_FUNC[1/1]: 0x1c09658 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:45.380 #23 NEW cov: 12520 ft: 14472 corp: 12/884b lim: 100 exec/s: 0 rss: 74Mb L: 94/98 MS: 1 ChangeBinInt- 00:12:45.380 [2024-10-14 17:29:42.419274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.419302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.419364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:12:45.380 [2024-10-14 17:29:42.419381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.419447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13455272147882261178 len:47803 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.419465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.380 [2024-10-14 17:29:42.419556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744072547401727 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.380 [2024-10-14 17:29:42.419578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.380 #24 NEW cov: 12520 ft: 14512 corp: 13/975b lim: 100 exec/s: 24 rss: 75Mb L: 91/98 MS: 1 InsertRepeatedBytes- 00:12:45.639 [2024-10-14 17:29:42.489547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.489579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.639 [2024-10-14 17:29:42.489656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.489673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.639 [2024-10-14 17:29:42.489742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.489759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.639 [2024-10-14 17:29:42.489853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.489872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.639 #25 NEW cov: 12520 ft: 14595 corp: 14/1071b lim: 100 exec/s: 25 rss: 75Mb L: 96/98 MS: 1 CMP- DE: "\031\001"- 00:12:45.639 [2024-10-14 17:29:42.559820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.559851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.639 [2024-10-14 17:29:42.559928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446667911054491647 len:47803 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.559946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.639 [2024-10-14 17:29:42.560034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.560052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.639 #26 NEW cov: 12520 ft: 14622 corp: 15/1132b lim: 100 exec/s: 26 rss: 75Mb L: 61/98 MS: 1 CrossOver- 00:12:45.639 [2024-10-14 17:29:42.610405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952491108940988 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.639 [2024-10-14 17:29:42.610436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.639 [2024-10-14 17:29:42.610519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.640 [2024-10-14 17:29:42.610539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.640 [2024-10-14 17:29:42.610602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.640 [2024-10-14 17:29:42.610619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.640 [2024-10-14 17:29:42.610707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.640 [2024-10-14 17:29:42.610728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.640 #27 NEW cov: 12520 ft: 14651 corp: 16/1229b lim: 100 exec/s: 27 rss: 75Mb L: 97/98 MS: 1 InsertRepeatedBytes- 00:12:45.640 [2024-10-14 17:29:42.690443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.640 [2024-10-14 17:29:42.690472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.640 [2024-10-14 17:29:42.690545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.640 [2024-10-14 17:29:42.690564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.640 [2024-10-14 17:29:42.690623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.640 [2024-10-14 17:29:42.690641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.640 #28 NEW cov: 12520 ft: 14697 corp: 17/1305b lim: 100 exec/s: 28 rss: 75Mb L: 76/98 MS: 1 EraseBytes- 00:12:45.899 [2024-10-14 17:29:42.740530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321146 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.740561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.740632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446670118667681791 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.740651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.899 #34 NEW cov: 12520 ft: 14727 corp: 18/1356b lim: 100 exec/s: 34 rss: 75Mb L: 51/98 MS: 1 InsertByte- 00:12:45.899 [2024-10-14 17:29:42.811747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952491108940988 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.811781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.811858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.811879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.811946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.811968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.812067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.812088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.899 #35 NEW cov: 12520 ft: 14775 corp: 19/1450b lim: 100 exec/s: 35 rss: 75Mb L: 94/98 MS: 1 InsertByte- 00:12:45.899 [2024-10-14 17:29:42.862260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952491108940988 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.862291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.862362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.862382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.862448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.862465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.862554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.862574] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:45.899 #36 NEW cov: 12520 ft: 14825 corp: 20/1546b lim: 100 exec/s: 36 rss: 75Mb L: 96/98 MS: 1 PersAutoDict- DE: "\031\001"- 00:12:45.899 [2024-10-14 17:29:42.931723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321146 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.931753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.931827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.931843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:45.899 #37 NEW cov: 12520 ft: 14870 corp: 21/1592b lim: 100 exec/s: 37 rss: 75Mb L: 46/98 MS: 1 EraseBytes- 00:12:45.899 [2024-10-14 17:29:42.981918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321005 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.981948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:45.899 [2024-10-14 17:29:42.982023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:45.899 [2024-10-14 17:29:42.982045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.158 #38 NEW cov: 12520 ft: 14887 corp: 22/1649b lim: 100 exec/s: 38 rss: 75Mb L: 57/98 MS: 1 ChangeByte- 00:12:46.158 [2024-10-14 17:29:43.032913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069464915967 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.158 [2024-10-14 17:29:43.032943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.158 [2024-10-14 17:29:43.033009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.158 [2024-10-14 17:29:43.033031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.033101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.033117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.033205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.033227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:46.159 #39 NEW cov: 12520 ft: 14890 corp: 
23/1734b lim: 100 exec/s: 39 rss: 75Mb L: 85/98 MS: 1 CrossOver- 00:12:46.159 [2024-10-14 17:29:43.083213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952491108940988 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.083242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.083319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.083336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.083413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.083429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.083519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48313 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.083537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:46.159 #40 NEW cov: 12520 ft: 14906 corp: 24/1831b lim: 100 exec/s: 40 rss: 75Mb L: 97/98 MS: 1 ChangeBit- 00:12:46.159 [2024-10-14 17:29:43.152937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321005 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.152965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.153070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.153089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.159 #41 NEW cov: 12520 ft: 14927 corp: 25/1888b lim: 100 exec/s: 41 rss: 75Mb L: 57/98 MS: 1 ChangeByte- 00:12:46.159 [2024-10-14 17:29:43.224044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.224080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.224196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.224215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.224310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.224327] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:46.159 [2024-10-14 17:29:43.224430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.159 [2024-10-14 17:29:43.224447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:46.419 #42 NEW cov: 12520 ft: 14945 corp: 26/1979b lim: 100 exec/s: 42 rss: 75Mb L: 91/98 MS: 1 ChangeByte- 00:12:46.419 [2024-10-14 17:29:43.294106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.294135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.294216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.294235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.294299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.294318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:46.419 #43 NEW cov: 12520 ft: 14952 corp: 27/2056b lim: 100 exec/s: 43 rss: 75Mb L: 77/98 MS: 1 InsertByte- 00:12:46.419 [2024-10-14 17:29:43.364648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.364676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.364750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.364769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.364848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13599952493558414524 len:48317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.364866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:46.419 #44 NEW cov: 12520 ft: 14979 corp: 28/2133b lim: 100 exec/s: 44 rss: 75Mb L: 77/98 MS: 1 InsertByte- 00:12:46.419 [2024-10-14 17:29:43.414612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321146 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.414641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.414700] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709510911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.414718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.419 #45 NEW cov: 12520 ft: 15005 corp: 29/2191b lim: 100 exec/s: 45 rss: 75Mb L: 58/98 MS: 1 InsertByte- 00:12:46.419 [2024-10-14 17:29:43.465503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13455348310537321005 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.465533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.465613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.465632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.465712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9982943851654580874 len:35467 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.465730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:12:46.419 [2024-10-14 17:29:43.465818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:47803 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:12:46.419 [2024-10-14 17:29:43.465837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:12:46.419 #46 NEW cov: 12520 ft: 15016 corp: 30/2272b lim: 100 exec/s: 23 rss: 75Mb L: 81/98 MS: 1 InsertRepeatedBytes- 00:12:46.419 #46 DONE cov: 12520 ft: 15016 corp: 30/2272b lim: 100 exec/s: 23 rss: 75Mb 00:12:46.419 ###### Recommended dictionary. ###### 00:12:46.419 "\031\001" # Uses: 2 00:12:46.419 ###### End of recommended dictionary. 
###### 00:12:46.419 Done 46 runs in 2 second(s) 00:12:46.679 17:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:12:46.679 17:29:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:46.679 17:29:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:46.679 17:29:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:12:46.679 00:12:46.679 real 1m3.830s 00:12:46.679 user 1m39.778s 00:12:46.679 sys 0m7.649s 00:12:46.679 17:29:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.679 17:29:43 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 ************************************ 00:12:46.679 END TEST nvmf_llvm_fuzz 00:12:46.679 ************************************ 00:12:46.679 17:29:43 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:12:46.679 17:29:43 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:12:46.679 17:29:43 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:12:46.679 17:29:43 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:46.679 17:29:43 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.679 17:29:43 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 ************************************ 00:12:46.679 START TEST vfio_llvm_fuzz 00:12:46.679 ************************************ 00:12:46.679 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:12:46.942 * Looking for test storage... 
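The real/user/sys summary and the END TEST nvmf_llvm_fuzz / START TEST vfio_llvm_fuzz banners above are printed by the run_test wrapper from test/common/autotest_common.sh. The wrapper's source is not part of this log; the stand-in below is a hypothetical sketch that only mimics the visible behaviour and is not SPDK's implementation.

# Hypothetical stand-in for run_test: reproduces only what the log shows
# (banner, timed sub-test, banner), not the real autotest_common.sh logic.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys summary seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
# Invocation as it appears in the trace:
# run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh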
00:12:46.942 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.942 --rc genhtml_branch_coverage=1 00:12:46.942 --rc genhtml_function_coverage=1 00:12:46.942 --rc genhtml_legend=1 00:12:46.942 --rc geninfo_all_blocks=1 00:12:46.942 --rc geninfo_unexecuted_blocks=1 00:12:46.942 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:46.942 ' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.942 --rc genhtml_branch_coverage=1 00:12:46.942 --rc genhtml_function_coverage=1 00:12:46.942 --rc genhtml_legend=1 00:12:46.942 --rc geninfo_all_blocks=1 00:12:46.942 --rc geninfo_unexecuted_blocks=1 00:12:46.942 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:46.942 ' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.942 --rc genhtml_branch_coverage=1 00:12:46.942 --rc genhtml_function_coverage=1 00:12:46.942 --rc genhtml_legend=1 00:12:46.942 --rc geninfo_all_blocks=1 00:12:46.942 --rc geninfo_unexecuted_blocks=1 00:12:46.942 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:46.942 ' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.942 --rc genhtml_branch_coverage=1 00:12:46.942 --rc genhtml_function_coverage=1 00:12:46.942 --rc genhtml_legend=1 00:12:46.942 --rc geninfo_all_blocks=1 00:12:46.942 --rc geninfo_unexecuted_blocks=1 00:12:46.942 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:46.942 ' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:46.942 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:12:46.943 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:46.943 #define SPDK_CONFIG_H 00:12:46.943 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:46.943 #define SPDK_CONFIG_APPS 1 00:12:46.943 #define SPDK_CONFIG_ARCH native 00:12:46.943 #undef SPDK_CONFIG_ASAN 00:12:46.943 #undef SPDK_CONFIG_AVAHI 00:12:46.943 #undef SPDK_CONFIG_CET 00:12:46.943 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:46.943 #define SPDK_CONFIG_COVERAGE 1 00:12:46.943 #define SPDK_CONFIG_CROSS_PREFIX 00:12:46.943 #undef SPDK_CONFIG_CRYPTO 00:12:46.943 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:46.943 #undef SPDK_CONFIG_CUSTOMOCF 00:12:46.943 #undef SPDK_CONFIG_DAOS 00:12:46.943 #define SPDK_CONFIG_DAOS_DIR 00:12:46.943 #define SPDK_CONFIG_DEBUG 1 00:12:46.943 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:46.943 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:12:46.943 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:46.943 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:46.943 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:46.943 #undef SPDK_CONFIG_DPDK_UADK 00:12:46.943 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:12:46.943 #define SPDK_CONFIG_EXAMPLES 1 00:12:46.943 #undef SPDK_CONFIG_FC 00:12:46.943 #define SPDK_CONFIG_FC_PATH 00:12:46.944 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:46.944 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:46.944 #define SPDK_CONFIG_FSDEV 1 00:12:46.944 #undef SPDK_CONFIG_FUSE 00:12:46.944 #define SPDK_CONFIG_FUZZER 1 00:12:46.944 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:12:46.944 #undef SPDK_CONFIG_GOLANG 00:12:46.944 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:46.944 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:46.944 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:46.944 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:46.944 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:46.944 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:46.944 #undef SPDK_CONFIG_HAVE_LZ4 00:12:46.944 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:46.944 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:46.944 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:46.944 #define SPDK_CONFIG_IDXD 1 00:12:46.944 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:46.944 #undef SPDK_CONFIG_IPSEC_MB 00:12:46.944 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:46.944 #define SPDK_CONFIG_ISAL 1 00:12:46.944 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:46.944 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:46.944 #define SPDK_CONFIG_LIBDIR 00:12:46.944 #undef SPDK_CONFIG_LTO 00:12:46.944 #define SPDK_CONFIG_MAX_LCORES 128 00:12:46.944 #define SPDK_CONFIG_NVME_CUSE 1 00:12:46.944 #undef SPDK_CONFIG_OCF 00:12:46.944 #define SPDK_CONFIG_OCF_PATH 00:12:46.944 #define SPDK_CONFIG_OPENSSL_PATH 00:12:46.944 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:46.944 #define SPDK_CONFIG_PGO_DIR 00:12:46.944 #undef SPDK_CONFIG_PGO_USE 00:12:46.944 #define SPDK_CONFIG_PREFIX /usr/local 00:12:46.944 #undef SPDK_CONFIG_RAID5F 00:12:46.944 #undef SPDK_CONFIG_RBD 00:12:46.944 #define SPDK_CONFIG_RDMA 1 00:12:46.944 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:46.944 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:46.944 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:46.944 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:46.944 #undef SPDK_CONFIG_SHARED 00:12:46.944 #undef SPDK_CONFIG_SMA 00:12:46.944 #define SPDK_CONFIG_TESTS 1 00:12:46.944 #undef SPDK_CONFIG_TSAN 00:12:46.944 #define SPDK_CONFIG_UBLK 1 00:12:46.944 #define SPDK_CONFIG_UBSAN 1 00:12:46.944 #undef SPDK_CONFIG_UNIT_TESTS 00:12:46.944 #undef SPDK_CONFIG_URING 00:12:46.944 #define SPDK_CONFIG_URING_PATH 00:12:46.944 #undef SPDK_CONFIG_URING_ZNS 00:12:46.944 #undef SPDK_CONFIG_USDT 00:12:46.944 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:46.944 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:46.944 #define SPDK_CONFIG_VFIO_USER 1 00:12:46.944 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:46.944 #define SPDK_CONFIG_VHOST 1 00:12:46.944 #define SPDK_CONFIG_VIRTIO 1 00:12:46.944 #undef SPDK_CONFIG_VTUNE 00:12:46.944 #define SPDK_CONFIG_VTUNE_DIR 00:12:46.944 #define SPDK_CONFIG_WERROR 1 00:12:46.944 #define SPDK_CONFIG_WPDK_DIR 00:12:46.944 #undef SPDK_CONFIG_XNVME 00:12:46.944 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:46.944 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:46.945 17:29:43 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:46.945 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 2111953 ]] 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 2111953 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:46.946 17:29:43 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.IZbPKz 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.IZbPKz/tests/vfio /tmp/spdk.IZbPKz 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86850031616 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500372480 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=7650340864 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.946 
17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245422592 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894340096 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900074496 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5734400 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.946 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249846272 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250186240 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=339968 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450024960 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450037248 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:46.947 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:46.947 * Looking for test storage... 
00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86850031616 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=9864933376 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:47.207 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.207 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:47.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.207 --rc genhtml_branch_coverage=1 00:12:47.207 --rc genhtml_function_coverage=1 00:12:47.207 --rc genhtml_legend=1 00:12:47.207 --rc geninfo_all_blocks=1 00:12:47.207 --rc geninfo_unexecuted_blocks=1 00:12:47.207 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:47.207 ' 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:47.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.208 --rc genhtml_branch_coverage=1 00:12:47.208 --rc genhtml_function_coverage=1 00:12:47.208 --rc genhtml_legend=1 00:12:47.208 --rc geninfo_all_blocks=1 00:12:47.208 --rc geninfo_unexecuted_blocks=1 00:12:47.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:47.208 ' 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:47.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.208 --rc genhtml_branch_coverage=1 00:12:47.208 --rc genhtml_function_coverage=1 00:12:47.208 --rc genhtml_legend=1 00:12:47.208 --rc geninfo_all_blocks=1 00:12:47.208 --rc geninfo_unexecuted_blocks=1 00:12:47.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:47.208 ' 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:47.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.208 --rc genhtml_branch_coverage=1 00:12:47.208 --rc genhtml_function_coverage=1 00:12:47.208 --rc genhtml_legend=1 00:12:47.208 --rc geninfo_all_blocks=1 00:12:47.208 --rc geninfo_unexecuted_blocks=1 00:12:47.208 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:47.208 ' 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:12:47.208 17:29:44 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:12:47.208 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:47.208 17:29:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:12:47.208 [2024-10-14 17:29:44.193717] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:47.208 [2024-10-14 17:29:44.193795] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112174 ] 00:12:47.208 [2024-10-14 17:29:44.279076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.467 [2024-10-14 17:29:44.326655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.467 INFO: Running with entropic power schedule (0xFF, 100). 00:12:47.467 INFO: Seed: 1096561683 00:12:47.467 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:12:47.467 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:12:47.467 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:12:47.467 INFO: A corpus is not provided, starting from an empty corpus 00:12:47.467 #2 INITED exec/s: 0 rss: 68Mb 00:12:47.467 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:47.467 This may also happen if the target rejected all inputs we tried so far 00:12:47.727 [2024-10-14 17:29:44.573488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:12:47.986 NEW_FUNC[1/671]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:12:47.986 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:47.986 #6 NEW cov: 11154 ft: 11079 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 InsertByte-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:12:48.245 #12 NEW cov: 11168 ft: 14588 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 CMP- DE: " \000\000\000"- 00:12:48.245 #13 NEW cov: 11168 ft: 15522 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:12:48.503 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:48.503 #16 NEW cov: 11185 ft: 15630 corp: 5/25b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 3 InsertByte-ChangeBit-PersAutoDict- DE: " \000\000\000"- 00:12:48.503 #17 NEW cov: 11185 ft: 16013 corp: 6/31b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:12:48.762 #18 NEW cov: 11185 ft: 16418 corp: 7/37b lim: 6 exec/s: 18 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:12:48.762 #19 NEW cov: 11185 ft: 16481 corp: 8/43b lim: 6 exec/s: 19 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:12:49.021 #20 NEW cov: 11185 ft: 16681 corp: 9/49b lim: 6 exec/s: 20 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:12:49.021 #21 NEW cov: 11185 ft: 16699 corp: 10/55b lim: 6 exec/s: 21 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:12:49.279 #22 NEW cov: 11185 ft: 16788 corp: 11/61b lim: 6 exec/s: 22 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:12:49.279 #23 NEW cov: 11195 ft: 17094 corp: 12/67b lim: 6 exec/s: 23 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:12:49.279 #24 NEW cov: 11202 ft: 17153 corp: 13/73b lim: 6 
exec/s: 24 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:12:49.538 #25 NEW cov: 11202 ft: 17265 corp: 14/79b lim: 6 exec/s: 25 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:12:49.538 #26 NEW cov: 11202 ft: 17317 corp: 15/85b lim: 6 exec/s: 13 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:12:49.538 #26 DONE cov: 11202 ft: 17317 corp: 15/85b lim: 6 exec/s: 13 rss: 77Mb 00:12:49.538 ###### Recommended dictionary. ###### 00:12:49.538 " \000\000\000" # Uses: 1 00:12:49.538 ###### End of recommended dictionary. ###### 00:12:49.538 Done 26 runs in 2 second(s) 00:12:49.538 [2024-10-14 17:29:46.623230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:12:49.797 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:12:49.798 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:49.798 17:29:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:12:49.798 [2024-10-14 17:29:46.888032] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:49.798 [2024-10-14 17:29:46.888104] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112528 ] 00:12:50.057 [2024-10-14 17:29:46.971884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.057 [2024-10-14 17:29:47.016882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.317 INFO: Running with entropic power schedule (0xFF, 100). 00:12:50.317 INFO: Seed: 3788560733 00:12:50.317 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:12:50.317 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:12:50.317 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:12:50.317 INFO: A corpus is not provided, starting from an empty corpus 00:12:50.317 #2 INITED exec/s: 0 rss: 68Mb 00:12:50.317 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:50.317 This may also happen if the target rejected all inputs we tried so far 00:12:50.317 [2024-10-14 17:29:47.266522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:12:50.317 [2024-10-14 17:29:47.339883] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:50.317 [2024-10-14 17:29:47.339910] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:50.317 [2024-10-14 17:29:47.339929] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:50.835 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:12:50.835 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:50.835 #12 NEW cov: 11150 ft: 11113 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 CopyPart-ChangeBinInt-CrossOver-ChangeBit-InsertByte- 00:12:50.835 [2024-10-14 17:29:47.860617] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:50.835 [2024-10-14 17:29:47.860659] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:50.835 [2024-10-14 17:29:47.860678] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:51.093 #28 NEW cov: 11164 ft: 14413 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:12:51.093 [2024-10-14 17:29:48.069362] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:51.093 [2024-10-14 17:29:48.069386] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:51.093 [2024-10-14 17:29:48.069420] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:51.351 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:51.351 #39 NEW cov: 11181 ft: 15467 corp: 4/13b lim: 4 exec/s: 0 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:12:51.351 [2024-10-14 17:29:48.285369] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:51.351 [2024-10-14 17:29:48.285393] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: 
Invalid argument 00:12:51.351 [2024-10-14 17:29:48.285411] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:51.351 #40 NEW cov: 11181 ft: 15795 corp: 5/17b lim: 4 exec/s: 40 rss: 77Mb L: 4/4 MS: 1 CMP- DE: "\002\000"- 00:12:51.610 [2024-10-14 17:29:48.484528] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:51.610 [2024-10-14 17:29:48.484550] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:51.610 [2024-10-14 17:29:48.484568] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:51.610 #41 NEW cov: 11181 ft: 16220 corp: 6/21b lim: 4 exec/s: 41 rss: 77Mb L: 4/4 MS: 1 CopyPart- 00:12:51.610 [2024-10-14 17:29:48.688066] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:51.610 [2024-10-14 17:29:48.688089] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:51.610 [2024-10-14 17:29:48.688106] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:51.868 #43 NEW cov: 11181 ft: 16586 corp: 7/25b lim: 4 exec/s: 43 rss: 77Mb L: 4/4 MS: 2 InsertByte-CrossOver- 00:12:51.868 [2024-10-14 17:29:48.903948] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:51.868 [2024-10-14 17:29:48.903971] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:51.868 [2024-10-14 17:29:48.904004] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:52.127 #44 NEW cov: 11181 ft: 16842 corp: 8/29b lim: 4 exec/s: 44 rss: 77Mb L: 4/4 MS: 1 CopyPart- 00:12:52.127 [2024-10-14 17:29:49.097739] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:52.127 [2024-10-14 17:29:49.097762] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:52.127 [2024-10-14 17:29:49.097779] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:52.127 #49 NEW cov: 11188 ft: 17652 corp: 9/33b lim: 4 exec/s: 49 rss: 77Mb L: 4/4 MS: 5 EraseBytes-PersAutoDict-ChangeBit-ShuffleBytes-InsertByte- DE: "\002\000"- 00:12:52.386 [2024-10-14 17:29:49.296938] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:12:52.386 [2024-10-14 17:29:49.296961] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:12:52.386 [2024-10-14 17:29:49.296979] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:12:52.386 #50 NEW cov: 11188 ft: 17926 corp: 10/37b lim: 4 exec/s: 25 rss: 77Mb L: 4/4 MS: 1 ChangeBinInt- 00:12:52.386 #50 DONE cov: 11188 ft: 17926 corp: 10/37b lim: 4 exec/s: 25 rss: 77Mb 00:12:52.386 ###### Recommended dictionary. ###### 00:12:52.386 "\002\000" # Uses: 1 00:12:52.386 ###### End of recommended dictionary. 
###### 00:12:52.386 Done 50 runs in 2 second(s) 00:12:52.386 [2024-10-14 17:29:49.436234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:12:52.645 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:52.645 17:29:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:12:52.645 [2024-10-14 17:29:49.700309] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:12:52.645 [2024-10-14 17:29:49.700377] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112896 ] 00:12:52.904 [2024-10-14 17:29:49.784560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.904 [2024-10-14 17:29:49.828599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.164 INFO: Running with entropic power schedule (0xFF, 100). 00:12:53.164 INFO: Seed: 2302608969 00:12:53.164 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:12:53.164 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:12:53.164 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:12:53.164 INFO: A corpus is not provided, starting from an empty corpus 00:12:53.164 #2 INITED exec/s: 0 rss: 68Mb 00:12:53.164 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:12:53.164 This may also happen if the target rejected all inputs we tried so far 00:12:53.164 [2024-10-14 17:29:50.074590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:12:53.164 [2024-10-14 17:29:50.097772] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:53.423 NEW_FUNC[1/672]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:12:53.423 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:53.423 #3 NEW cov: 11133 ft: 11070 corp: 2/9b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:12:53.682 [2024-10-14 17:29:50.535998] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:53.682 #9 NEW cov: 11147 ft: 13808 corp: 3/17b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:12:53.682 [2024-10-14 17:29:50.658715] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:53.682 #10 NEW cov: 11147 ft: 14048 corp: 4/25b lim: 8 exec/s: 0 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:12:53.941 [2024-10-14 17:29:50.784501] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:53.941 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:53.941 #11 NEW cov: 11164 ft: 15156 corp: 5/33b lim: 8 exec/s: 0 rss: 77Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:12:53.941 [2024-10-14 17:29:50.918646] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:53.941 #17 NEW cov: 11164 ft: 15961 corp: 6/41b lim: 8 exec/s: 0 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:12:54.200 [2024-10-14 17:29:51.041804] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.200 #18 NEW cov: 11164 ft: 16665 corp: 7/49b lim: 8 exec/s: 18 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:12:54.200 [2024-10-14 17:29:51.154218] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.200 #19 NEW cov: 11164 ft: 17394 corp: 8/57b lim: 8 exec/s: 19 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:12:54.200 [2024-10-14 17:29:51.276448] vfio_user.c: 
170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.458 #20 NEW cov: 11164 ft: 17475 corp: 9/65b lim: 8 exec/s: 20 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:12:54.458 [2024-10-14 17:29:51.398351] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.458 #21 NEW cov: 11164 ft: 17870 corp: 10/73b lim: 8 exec/s: 21 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:12:54.458 [2024-10-14 17:29:51.520567] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.717 #27 NEW cov: 11164 ft: 18039 corp: 11/81b lim: 8 exec/s: 27 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:12:54.717 [2024-10-14 17:29:51.633865] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.717 #35 NEW cov: 11164 ft: 18082 corp: 12/89b lim: 8 exec/s: 35 rss: 77Mb L: 8/8 MS: 3 EraseBytes-CrossOver-CrossOver- 00:12:54.717 [2024-10-14 17:29:51.746120] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.976 #36 NEW cov: 11164 ft: 18120 corp: 13/97b lim: 8 exec/s: 36 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:12:54.976 [2024-10-14 17:29:51.858851] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.976 #42 NEW cov: 11171 ft: 18137 corp: 14/105b lim: 8 exec/s: 42 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:12:54.976 [2024-10-14 17:29:51.982204] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:12:54.976 #48 NEW cov: 11171 ft: 18181 corp: 15/113b lim: 8 exec/s: 24 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:12:54.976 #48 DONE cov: 11171 ft: 18181 corp: 15/113b lim: 8 exec/s: 24 rss: 77Mb 00:12:54.976 Done 48 runs in 2 second(s) 00:12:55.235 [2024-10-14 17:29:52.068253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p 
/tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:12:55.235 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:55.235 17:29:52 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:12:55.495 [2024-10-14 17:29:52.344677] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:55.495 [2024-10-14 17:29:52.344748] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113256 ] 00:12:55.495 [2024-10-14 17:29:52.428716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.495 [2024-10-14 17:29:52.473347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.753 INFO: Running with entropic power schedule (0xFF, 100). 00:12:55.753 INFO: Seed: 657626647 00:12:55.753 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:12:55.753 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:12:55.753 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:12:55.753 INFO: A corpus is not provided, starting from an empty corpus 00:12:55.753 #2 INITED exec/s: 0 rss: 68Mb 00:12:55.753 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:55.753 This may also happen if the target rejected all inputs we tried so far 00:12:55.753 [2024-10-14 17:29:52.726524] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:12:56.271 NEW_FUNC[1/672]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:12:56.271 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:56.271 #117 NEW cov: 11138 ft: 10960 corp: 2/33b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 5 CrossOver-EraseBytes-ShuffleBytes-InsertRepeatedBytes-CopyPart- 00:12:56.531 #118 NEW cov: 11152 ft: 13915 corp: 3/65b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:12:56.790 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:56.790 #119 NEW cov: 11169 ft: 15091 corp: 4/97b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:57.049 #120 NEW cov: 11169 ft: 16250 corp: 5/129b lim: 32 exec/s: 120 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:57.049 #121 NEW cov: 11172 ft: 16750 corp: 6/161b lim: 32 exec/s: 121 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:57.308 #122 NEW cov: 11172 ft: 16980 corp: 7/193b lim: 32 exec/s: 122 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:12:57.567 #128 NEW cov: 11179 ft: 17296 corp: 8/225b lim: 32 exec/s: 128 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:12:57.842 #129 NEW cov: 11179 ft: 17668 corp: 9/257b lim: 32 exec/s: 64 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:12:57.842 #129 DONE cov: 11179 ft: 17668 corp: 9/257b lim: 32 exec/s: 64 rss: 77Mb 00:12:57.842 Done 129 runs in 2 second(s) 00:12:57.842 [2024-10-14 17:29:54.810232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 
/tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:12:58.102 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:12:58.102 17:29:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:12:58.102 [2024-10-14 17:29:55.075981] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:12:58.102 [2024-10-14 17:29:55.076059] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113611 ] 00:12:58.102 [2024-10-14 17:29:55.160728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.361 [2024-10-14 17:29:55.206446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.361 INFO: Running with entropic power schedule (0xFF, 100). 00:12:58.361 INFO: Seed: 3387625884 00:12:58.361 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:12:58.361 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:12:58.361 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:12:58.361 INFO: A corpus is not provided, starting from an empty corpus 00:12:58.361 #2 INITED exec/s: 0 rss: 68Mb 00:12:58.361 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:12:58.361 This may also happen if the target rejected all inputs we tried so far 00:12:58.361 [2024-10-14 17:29:55.449279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:12:58.621 [2024-10-14 17:29:55.473089] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=300 offset=0xa00000000000000 prot=0x3: Invalid argument 00:12:58.621 [2024-10-14 17:29:55.473113] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:12:58.621 [2024-10-14 17:29:55.473123] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:12:58.621 [2024-10-14 17:29:55.473157] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:58.621 [2024-10-14 17:29:55.474084] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:12:58.621 [2024-10-14 17:29:55.474101] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:58.621 [2024-10-14 17:29:55.474125] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:58.880 NEW_FUNC[1/673]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:12:58.880 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:12:58.880 #144 NEW cov: 11149 ft: 10737 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:12:58.880 [2024-10-14 17:29:55.914280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 18014398509481984 > max 8796093022208 00:12:58.880 [2024-10-14 17:29:55.914316] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x40000000000000) offset=0xa00000000000000 flags=0x3: No space left on device 00:12:58.880 [2024-10-14 17:29:55.914328] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:12:58.880 [2024-10-14 17:29:55.914364] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:58.880 [2024-10-14 17:29:55.915276] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x40000000000000) flags=0: No such file or directory 00:12:58.880 [2024-10-14 17:29:55.915299] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:58.880 [2024-10-14 17:29:55.915318] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.139 #155 NEW cov: 11163 ft: 14006 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:12:59.139 [2024-10-14 17:29:56.039583] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), 0xa00) fd=302 offset=0xa00000000000000 prot=0x3: Permission denied 00:12:59.139 [2024-10-14 17:29:56.039609] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0xa00) offset=0xa00000000000000 flags=0x3: Permission denied 00:12:59.139 [2024-10-14 17:29:56.039620] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: 
Permission denied 00:12:59.139 [2024-10-14 17:29:56.039655] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.139 [2024-10-14 17:29:56.040578] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0xa00) flags=0: No such file or directory 00:12:59.139 [2024-10-14 17:29:56.040602] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.139 [2024-10-14 17:29:56.040619] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.139 #156 NEW cov: 11166 ft: 15310 corp: 4/97b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:12:59.139 [2024-10-14 17:29:56.163874] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), 0xfe00) fd=302 offset=0xa00000000000000 prot=0x3: Permission denied 00:12:59.139 [2024-10-14 17:29:56.163900] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0xfe00) offset=0xa00000000000000 flags=0x3: Permission denied 00:12:59.139 [2024-10-14 17:29:56.163931] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:12:59.139 [2024-10-14 17:29:56.163950] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.139 [2024-10-14 17:29:56.164902] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0xfe00) flags=0: No such file or directory 00:12:59.139 [2024-10-14 17:29:56.164925] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.139 [2024-10-14 17:29:56.164944] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.398 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:12:59.398 #162 NEW cov: 11183 ft: 15807 corp: 5/129b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:59.398 [2024-10-14 17:29:56.289021] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0xa00000000000000 prot=0x3: Invalid argument 00:12:59.398 [2024-10-14 17:29:56.289052] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0xa00000000000000 flags=0x3: Invalid argument 00:12:59.398 [2024-10-14 17:29:56.289063] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:12:59.398 [2024-10-14 17:29:56.289082] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.398 [2024-10-14 17:29:56.290042] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:12:59.398 [2024-10-14 17:29:56.290063] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.398 [2024-10-14 17:29:56.290081] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.398 #168 NEW cov: 11183 ft: 16200 corp: 6/161b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:12:59.398 [2024-10-14 17:29:56.413257] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0xa00000900000000 prot=0x3: Invalid argument 00:12:59.398 [2024-10-14 17:29:56.413282] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0xa00000900000000 flags=0x3: Invalid argument 00:12:59.398 [2024-10-14 17:29:56.413293] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:12:59.398 [2024-10-14 17:29:56.413326] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.398 [2024-10-14 17:29:56.414289] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:12:59.398 [2024-10-14 17:29:56.414311] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.398 [2024-10-14 17:29:56.414329] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.398 #169 NEW cov: 11183 ft: 16596 corp: 7/193b lim: 32 exec/s: 169 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:59.657 [2024-10-14 17:29:56.527606] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 18015480841240575 > max 8796093022208 00:12:59.657 [2024-10-14 17:29:56.527631] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xffffff0000000000, 0x3ffffbffffffff) offset=0xa00000000000000 flags=0x3: No space left on device 00:12:59.657 [2024-10-14 17:29:56.527643] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:12:59.657 [2024-10-14 17:29:56.527676] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.657 [2024-10-14 17:29:56.528621] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xffffff0000000000, 0x3ffffbffffffff) flags=0: No such file or directory 00:12:59.657 [2024-10-14 17:29:56.528644] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.657 [2024-10-14 17:29:56.528666] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.657 #175 NEW cov: 11183 ft: 17104 corp: 8/225b lim: 32 exec/s: 175 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:12:59.657 [2024-10-14 17:29:56.641869] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 18374686479671623680 > max 8796093022208 00:12:59.657 [2024-10-14 17:29:56.641894] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0xff00000000000000) offset=0xa000009000000ff flags=0x3: No space left on device 00:12:59.657 [2024-10-14 17:29:56.641905] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:12:59.657 [2024-10-14 17:29:56.641939] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.657 [2024-10-14 17:29:56.642891] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0xff00000000000000) flags=0: No such file or directory 00:12:59.657 [2024-10-14 17:29:56.642914] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.657 [2024-10-14 17:29:56.642933] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.657 #176 NEW cov: 11183 ft: 17271 corp: 9/257b lim: 32 exec/s: 176 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:12:59.916 [2024-10-14 17:29:56.766120] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0xa00000900000000 prot=0x3: Invalid argument 00:12:59.916 [2024-10-14 17:29:56.766145] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0xa00000900000000 flags=0x3: Invalid argument 00:12:59.916 [2024-10-14 17:29:56.766155] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:12:59.916 [2024-10-14 17:29:56.766190] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.916 [2024-10-14 17:29:56.767130] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:12:59.916 [2024-10-14 17:29:56.767153] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.916 [2024-10-14 17:29:56.767170] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.917 #177 NEW cov: 11183 ft: 17550 corp: 10/289b lim: 32 exec/s: 177 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:12:59.917 [2024-10-14 17:29:56.890194] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), 0x200000) fd=302 offset=0xa00000000000000 prot=0x3: Permission denied 00:12:59.917 [2024-10-14 17:29:56.890220] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x200000) offset=0xa00000000000000 flags=0x3: Permission denied 00:12:59.917 [2024-10-14 17:29:56.890231] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:12:59.917 [2024-10-14 17:29:56.890250] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.917 [2024-10-14 17:29:56.891219] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x200000) flags=0: No such file or directory 00:12:59.917 [2024-10-14 17:29:56.891242] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.917 [2024-10-14 17:29:56.891260] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:12:59.917 #178 NEW cov: 11183 ft: 17593 corp: 11/321b lim: 32 exec/s: 178 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:12:59.917 [2024-10-14 17:29:57.004514] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), 0x2d00) fd=302 offset=0xa00000900000000 prot=0x3: Permission denied 00:12:59.917 [2024-10-14 17:29:57.004541] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x2d00) offset=0xa00000900000000 flags=0x3: Permission denied 00:12:59.917 [2024-10-14 17:29:57.004555] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:12:59.917 [2024-10-14 17:29:57.004578] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:12:59.917 [2024-10-14 17:29:57.005502] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x2d00) flags=0: No such file or directory 00:12:59.917 [2024-10-14 17:29:57.005525] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:12:59.917 [2024-10-14 17:29:57.005543] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:13:00.176 #179 NEW cov: 11183 
ft: 17883 corp: 12/353b lim: 32 exec/s: 179 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:13:00.176 [2024-10-14 17:29:57.118607] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), 0x200000) fd=302 offset=0xa40000000000000 prot=0x3: Permission denied 00:13:00.176 [2024-10-14 17:29:57.118632] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x200000) offset=0xa40000000000000 flags=0x3: Permission denied 00:13:00.176 [2024-10-14 17:29:57.118643] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:13:00.176 [2024-10-14 17:29:57.118677] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:13:00.176 [2024-10-14 17:29:57.119598] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x200000) flags=0: No such file or directory 00:13:00.176 [2024-10-14 17:29:57.119621] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:13:00.176 [2024-10-14 17:29:57.119640] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:13:00.176 #185 NEW cov: 11183 ft: 18365 corp: 13/385b lim: 32 exec/s: 185 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:13:00.435 #190 NEW cov: 11194 ft: 18407 corp: 14/417b lim: 32 exec/s: 190 rss: 77Mb L: 32/32 MS: 5 CrossOver-ChangeBit-ChangeBit-ChangeByte-CopyPart- 00:13:00.435 [2024-10-14 17:29:57.367074] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x1f000000, 0x2d29000000) fd=302 offset=0 prot=0x3: Permission denied 00:13:00.435 [2024-10-14 17:29:57.367100] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x1f000000, 0x2d29000000) offset=0 flags=0x3: Permission denied 00:13:00.435 [2024-10-14 17:29:57.367111] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:13:00.435 [2024-10-14 17:29:57.367129] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:13:00.435 [2024-10-14 17:29:57.368094] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x1f000000, 0x2d29000000) flags=0: No such file or directory 00:13:00.435 [2024-10-14 17:29:57.368117] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:13:00.435 [2024-10-14 17:29:57.368136] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:13:00.435 #215 NEW cov: 11194 ft: 18425 corp: 15/449b lim: 32 exec/s: 107 rss: 77Mb L: 32/32 MS: 5 CrossOver-ChangeByte-ShuffleBytes-ChangeBinInt-InsertByte- 00:13:00.435 #215 DONE cov: 11194 ft: 18425 corp: 15/449b lim: 32 exec/s: 107 rss: 77Mb 00:13:00.435 Done 215 runs in 2 second(s) 00:13:00.435 [2024-10-14 17:29:57.459235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:13:00.695 17:29:57 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:13:00.695 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:13:00.695 17:29:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:13:00.695 [2024-10-14 17:29:57.728425] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 00:13:00.695 [2024-10-14 17:29:57.728498] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113972 ] 00:13:00.955 [2024-10-14 17:29:57.813431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.955 [2024-10-14 17:29:57.858423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.216 INFO: Running with entropic power schedule (0xFF, 100). 00:13:01.216 INFO: Seed: 1743672991 00:13:01.216 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:13:01.216 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:13:01.216 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:13:01.216 INFO: A corpus is not provided, starting from an empty corpus 00:13:01.216 #2 INITED exec/s: 0 rss: 68Mb 00:13:01.216 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:13:01.216 This may also happen if the target rejected all inputs we tried so far 00:13:01.216 [2024-10-14 17:29:58.099891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:13:01.216 [2024-10-14 17:29:58.179014] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:01.216 [2024-10-14 17:29:58.179059] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:01.735 NEW_FUNC[1/673]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:13:01.735 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:13:01.735 #15 NEW cov: 11149 ft: 11109 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 3 ChangeBit-InsertRepeatedBytes-InsertByte- 00:13:01.735 [2024-10-14 17:29:58.713349] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:01.735 [2024-10-14 17:29:58.713391] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:01.994 #21 NEW cov: 11163 ft: 13244 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CMP- DE: "\020\000"- 00:13:01.994 [2024-10-14 17:29:58.928457] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:01.994 [2024-10-14 17:29:58.928489] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:01.994 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:13:01.994 #27 NEW cov: 11183 ft: 14391 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:13:02.252 [2024-10-14 17:29:59.139526] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:02.252 [2024-10-14 17:29:59.139558] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:02.252 #33 NEW cov: 11183 ft: 15078 corp: 5/53b lim: 13 exec/s: 33 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:13:02.511 [2024-10-14 17:29:59.352136] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:02.511 [2024-10-14 17:29:59.352167] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:02.511 #44 NEW cov: 11183 ft: 15341 corp: 6/66b lim: 13 exec/s: 44 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:13:02.511 [2024-10-14 17:29:59.561149] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:02.511 [2024-10-14 17:29:59.561180] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:02.770 #45 NEW cov: 11183 ft: 15525 corp: 7/79b lim: 13 exec/s: 45 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:13:02.770 [2024-10-14 17:29:59.772805] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:02.770 [2024-10-14 17:29:59.772836] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:03.029 #46 NEW cov: 11190 ft: 15664 corp: 8/92b lim: 13 exec/s: 46 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:13:03.029 [2024-10-14 17:29:59.987745] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:03.029 [2024-10-14 17:29:59.987777] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:03.029 #49 NEW cov: 11190 ft: 
16008 corp: 9/105b lim: 13 exec/s: 24 rss: 77Mb L: 13/13 MS: 3 EraseBytes-ChangeByte-CopyPart- 00:13:03.029 #49 DONE cov: 11190 ft: 16008 corp: 9/105b lim: 13 exec/s: 24 rss: 77Mb 00:13:03.029 ###### Recommended dictionary. ###### 00:13:03.029 "\020\000" # Uses: 1 00:13:03.029 ###### End of recommended dictionary. ###### 00:13:03.029 Done 49 runs in 2 second(s) 00:13:03.289 [2024-10-14 17:30:00.129899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:13:03.289 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:13:03.289 17:30:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:13:03.549 [2024-10-14 17:30:00.399423] Starting SPDK v25.01-pre git sha1 f1e77dead / DPDK 24.03.0 initialization... 
00:13:03.549 [2024-10-14 17:30:00.399511] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114363 ] 00:13:03.549 [2024-10-14 17:30:00.487563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.549 [2024-10-14 17:30:00.533161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.809 INFO: Running with entropic power schedule (0xFF, 100). 00:13:03.809 INFO: Seed: 121701704 00:13:03.809 INFO: Loaded 1 modules (382606 inline 8-bit counters): 382606 [0x2bc560c, 0x2c22c9a), 00:13:03.809 INFO: Loaded 1 PC tables (382606 PCs): 382606 [0x2c22ca0,0x31f9580), 00:13:03.809 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:13:03.809 INFO: A corpus is not provided, starting from an empty corpus 00:13:03.809 #2 INITED exec/s: 0 rss: 68Mb 00:13:03.809 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:13:03.809 This may also happen if the target rejected all inputs we tried so far 00:13:03.809 [2024-10-14 17:30:00.783668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:13:03.809 [2024-10-14 17:30:00.864245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:03.809 [2024-10-14 17:30:00.864282] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:04.328 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:13:04.328 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:13:04.328 #21 NEW cov: 11141 ft: 11065 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 InsertRepeatedBytes-ChangeBinInt-ShuffleBytes-CrossOver- 00:13:04.328 [2024-10-14 17:30:01.384241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:04.328 [2024-10-14 17:30:01.384285] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:04.587 #31 NEW cov: 11158 ft: 14136 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 5 ChangeBit-ChangeByte-CrossOver-CrossOver-CrossOver- 00:13:04.587 [2024-10-14 17:30:01.598385] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:04.587 [2024-10-14 17:30:01.598421] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:04.846 NEW_FUNC[1/1]: 0x1bd5aa8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:13:04.846 #35 NEW cov: 11175 ft: 15152 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 4 ChangeByte-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes- 00:13:04.846 [2024-10-14 17:30:01.797843] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:04.846 [2024-10-14 17:30:01.797875] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:04.846 #36 NEW cov: 11175 ft: 16411 corp: 5/37b lim: 9 exec/s: 36 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:13:05.105 [2024-10-14 17:30:02.000059] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:05.105 
[2024-10-14 17:30:02.000093] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:05.105 #37 NEW cov: 11175 ft: 16971 corp: 6/46b lim: 9 exec/s: 37 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:13:05.364 [2024-10-14 17:30:02.199722] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:05.364 [2024-10-14 17:30:02.199753] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:05.364 #38 NEW cov: 11175 ft: 17342 corp: 7/55b lim: 9 exec/s: 38 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:13:05.364 [2024-10-14 17:30:02.397506] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:05.364 [2024-10-14 17:30:02.397537] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:05.622 #39 NEW cov: 11175 ft: 17500 corp: 8/64b lim: 9 exec/s: 39 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:13:05.622 [2024-10-14 17:30:02.592907] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:05.622 [2024-10-14 17:30:02.592938] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:05.622 #40 NEW cov: 11182 ft: 17790 corp: 9/73b lim: 9 exec/s: 40 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:13:05.880 [2024-10-14 17:30:02.790266] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:13:05.880 [2024-10-14 17:30:02.790297] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:13:05.880 #41 NEW cov: 11182 ft: 18097 corp: 10/82b lim: 9 exec/s: 20 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:13:05.880 #41 DONE cov: 11182 ft: 18097 corp: 10/82b lim: 9 exec/s: 20 rss: 76Mb 00:13:05.880 Done 41 runs in 2 second(s) 00:13:05.880 [2024-10-14 17:30:02.935239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:13:06.138 17:30:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:13:06.138 17:30:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:13:06.138 17:30:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:13:06.138 17:30:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:13:06.138 00:13:06.138 real 0m19.483s 00:13:06.138 user 0m26.920s 00:13:06.138 sys 0m1.963s 00:13:06.138 17:30:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.138 17:30:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:06.138 ************************************ 00:13:06.138 END TEST vfio_llvm_fuzz 00:13:06.138 ************************************ 00:13:06.138 00:13:06.138 real 1m23.685s 00:13:06.138 user 2m6.876s 00:13:06.138 sys 0m9.837s 00:13:06.138 17:30:03 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.138 17:30:03 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:06.138 ************************************ 00:13:06.138 END TEST llvm_fuzz 00:13:06.138 ************************************ 00:13:06.397 17:30:03 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:13:06.397 17:30:03 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:13:06.397 17:30:03 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:13:06.397 17:30:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.397 17:30:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.397 17:30:03 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:13:06.397 
17:30:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:13:06.397 17:30:03 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:13:06.397 17:30:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.675 INFO: APP EXITING 00:13:11.675 INFO: killing all VMs 00:13:11.675 INFO: killing vhost app 00:13:11.675 INFO: EXIT DONE 00:13:14.213 Waiting for block devices as requested 00:13:14.213 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:13:14.213 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:14.213 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:14.213 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:14.213 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:14.473 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:14.473 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:14.473 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:14.732 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:14.732 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:14.732 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:14.991 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:14.991 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:14.991 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:15.251 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:15.251 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:15.251 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:18.543 Cleaning 00:13:18.543 Removing: /dev/shm/spdk_tgt_trace.pid2092755 00:13:18.543 Removing: /var/run/dpdk/spdk_pid2090418 00:13:18.543 Removing: /var/run/dpdk/spdk_pid2091551 00:13:18.543 Removing: /var/run/dpdk/spdk_pid2092755 00:13:18.543 Removing: /var/run/dpdk/spdk_pid2093257 00:13:18.543 Removing: /var/run/dpdk/spdk_pid2094016 00:13:18.543 Removing: /var/run/dpdk/spdk_pid2094041 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2094877 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2094959 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2095308 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2095544 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2095783 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2096031 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2096271 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2096467 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2096662 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2096894 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2097477 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2099989 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2100127 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2100325 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2100409 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2100796 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2100802 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2101359 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2101364 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2101580 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2101743 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2101866 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2101965 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2102403 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2102561 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2102714 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2102885 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2103464 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2103819 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2104178 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2104531 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2104837 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2105126 00:13:18.544 
Removing: /var/run/dpdk/spdk_pid2105444 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2105802 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2106159 00:13:18.544 Removing: /var/run/dpdk/spdk_pid2106521 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2106875 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2107239 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2107598 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2107892 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2108206 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2108514 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2108867 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2109226 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2109582 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2109935 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2110298 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2110652 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2110960 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2111229 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2111570 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2112174 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2112528 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2112896 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2113256 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2113611 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2113972 00:13:18.803 Removing: /var/run/dpdk/spdk_pid2114363 00:13:18.803 Clean 00:13:18.803 17:30:15 -- common/autotest_common.sh@1451 -- # return 0 00:13:18.803 17:30:15 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:13:18.803 17:30:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.803 17:30:15 -- common/autotest_common.sh@10 -- # set +x 00:13:18.803 17:30:15 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:13:18.803 17:30:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.803 17:30:15 -- common/autotest_common.sh@10 -- # set +x 00:13:19.063 17:30:15 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:13:19.063 17:30:15 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:13:19.063 17:30:15 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:13:19.063 17:30:15 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:13:19.063 17:30:15 -- spdk/autotest.sh@394 -- # hostname 00:13:19.063 17:30:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-49 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:13:19.322 geninfo: WARNING: invalid characters removed from testname! 
00:13:24.598 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:13:28.809 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:13:31.346 17:30:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:39.469 17:30:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:44.761 17:30:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:50.035 17:30:46 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:13:55.308 17:30:52 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:14:00.582 17:30:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:14:05.862 17:31:02 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:14:05.862 17:31:02 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:14:05.862 17:31:02 -- common/autotest_common.sh@1691 -- $ lcov --version 00:14:05.862 17:31:02 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:14:05.862 17:31:02 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:14:05.862 17:31:02 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:14:05.862 17:31:02 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:14:05.862 17:31:02 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:14:05.862 17:31:02 -- scripts/common.sh@336 -- $ IFS=.-: 00:14:05.862 17:31:02 -- scripts/common.sh@336 -- $ read -ra ver1 00:14:05.862 17:31:02 -- scripts/common.sh@337 -- $ IFS=.-: 00:14:05.862 17:31:02 -- scripts/common.sh@337 -- $ read -ra ver2 00:14:05.862 17:31:02 -- scripts/common.sh@338 -- $ local 'op=<' 00:14:05.862 17:31:02 -- scripts/common.sh@340 -- $ ver1_l=2 00:14:05.862 17:31:02 -- scripts/common.sh@341 -- $ ver2_l=1 00:14:05.862 17:31:02 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:14:05.862 17:31:02 -- scripts/common.sh@344 -- $ case "$op" in 00:14:05.862 17:31:02 -- scripts/common.sh@345 -- $ : 1 00:14:05.863 17:31:02 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:14:05.863 17:31:02 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.863 17:31:02 -- scripts/common.sh@365 -- $ decimal 1 00:14:05.863 17:31:02 -- scripts/common.sh@353 -- $ local d=1 00:14:05.863 17:31:02 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:14:05.863 17:31:02 -- scripts/common.sh@355 -- $ echo 1 00:14:05.863 17:31:02 -- scripts/common.sh@365 -- $ ver1[v]=1 00:14:05.863 17:31:02 -- scripts/common.sh@366 -- $ decimal 2 00:14:05.863 17:31:02 -- scripts/common.sh@353 -- $ local d=2 00:14:05.863 17:31:02 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:14:05.863 17:31:02 -- scripts/common.sh@355 -- $ echo 2 00:14:05.863 17:31:02 -- scripts/common.sh@366 -- $ ver2[v]=2 00:14:05.863 17:31:02 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:14:05.863 17:31:02 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:14:05.863 17:31:02 -- scripts/common.sh@368 -- $ return 0 00:14:05.863 17:31:02 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.863 17:31:02 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:14:05.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.863 --rc genhtml_branch_coverage=1 00:14:05.863 --rc genhtml_function_coverage=1 00:14:05.863 --rc genhtml_legend=1 00:14:05.863 --rc geninfo_all_blocks=1 00:14:05.863 --rc geninfo_unexecuted_blocks=1 00:14:05.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:05.863 ' 00:14:05.863 17:31:02 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:14:05.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.863 --rc genhtml_branch_coverage=1 00:14:05.863 --rc genhtml_function_coverage=1 00:14:05.863 --rc genhtml_legend=1 00:14:05.863 --rc geninfo_all_blocks=1 00:14:05.863 --rc geninfo_unexecuted_blocks=1 00:14:05.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:05.863 ' 00:14:05.863 17:31:02 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:14:05.863 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:14:05.863 --rc genhtml_branch_coverage=1 00:14:05.863 --rc genhtml_function_coverage=1 00:14:05.863 --rc genhtml_legend=1 00:14:05.863 --rc geninfo_all_blocks=1 00:14:05.863 --rc geninfo_unexecuted_blocks=1 00:14:05.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:05.863 ' 00:14:05.863 17:31:02 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:14:05.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.863 --rc genhtml_branch_coverage=1 00:14:05.863 --rc genhtml_function_coverage=1 00:14:05.863 --rc genhtml_legend=1 00:14:05.863 --rc geninfo_all_blocks=1 00:14:05.863 --rc geninfo_unexecuted_blocks=1 00:14:05.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:05.863 ' 00:14:05.863 17:31:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:14:05.863 17:31:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:14:05.863 17:31:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:14:05.863 17:31:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.863 17:31:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.863 17:31:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.863 17:31:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.863 17:31:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.863 17:31:02 -- paths/export.sh@5 -- $ export PATH 00:14:05.863 17:31:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.863 17:31:02 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:14:05.863 17:31:02 -- common/autobuild_common.sh@486 -- $ date +%s 00:14:05.863 17:31:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728919862.XXXXXX 00:14:05.863 17:31:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728919862.7FbFbW 00:14:05.863 17:31:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:14:05.863 17:31:02 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:14:05.863 17:31:02 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:14:05.863 17:31:02 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:14:05.863 17:31:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:14:05.863 17:31:02 -- common/autobuild_common.sh@502 -- $ get_config_params 00:14:05.863 17:31:02 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:14:05.863 17:31:02 -- common/autotest_common.sh@10 -- $ set +x 00:14:05.863 17:31:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:14:05.863 17:31:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:14:05.863 17:31:02 -- pm/common@17 -- $ local monitor 00:14:05.863 17:31:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:05.863 17:31:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:05.863 17:31:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:05.863 17:31:02 -- pm/common@21 -- $ date +%s 00:14:05.863 17:31:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:05.863 17:31:02 -- pm/common@21 -- $ date +%s 00:14:05.863 17:31:02 -- pm/common@25 -- $ sleep 1 00:14:05.863 17:31:02 -- pm/common@21 -- $ date +%s 00:14:05.863 17:31:02 -- pm/common@21 -- $ date +%s 00:14:05.863 17:31:02 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728919862 00:14:05.863 17:31:02 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728919862 00:14:05.863 17:31:02 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728919862 00:14:05.863 17:31:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728919862 00:14:05.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728919862_collect-cpu-load.pm.log 00:14:05.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728919862_collect-vmstat.pm.log 00:14:05.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728919862_collect-cpu-temp.pm.log 00:14:05.863 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728919862_collect-bmc-pm.bmc.pm.log 00:14:06.804 17:31:03 -- common/autobuild_common.sh@505 -- $ trap 
stop_monitor_resources EXIT 00:14:06.804 17:31:03 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:14:06.804 17:31:03 -- spdk/autopackage.sh@14 -- $ timing_finish 00:14:06.805 17:31:03 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:14:06.805 17:31:03 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:14:06.805 17:31:03 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:14:07.065 17:31:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:14:07.065 17:31:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:07.065 17:31:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:07.065 17:31:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:07.065 17:31:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:14:07.065 17:31:03 -- pm/common@44 -- $ pid=2120996 00:14:07.065 17:31:03 -- pm/common@50 -- $ kill -TERM 2120996 00:14:07.065 17:31:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:07.065 17:31:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:14:07.065 17:31:03 -- pm/common@44 -- $ pid=2120998 00:14:07.065 17:31:03 -- pm/common@50 -- $ kill -TERM 2120998 00:14:07.065 17:31:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:07.065 17:31:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:14:07.065 17:31:03 -- pm/common@44 -- $ pid=2121000 00:14:07.065 17:31:03 -- pm/common@50 -- $ kill -TERM 2121000 00:14:07.065 17:31:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:07.065 17:31:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:14:07.065 17:31:03 -- pm/common@44 -- $ pid=2121022 00:14:07.065 17:31:03 -- pm/common@50 -- $ sudo -E kill -TERM 2121022 00:14:07.065 + [[ -n 1992014 ]] 00:14:07.065 + sudo kill 1992014 00:14:07.075 [Pipeline] } 00:14:07.090 [Pipeline] // stage 00:14:07.096 [Pipeline] } 00:14:07.111 [Pipeline] // timeout 00:14:07.116 [Pipeline] } 00:14:07.131 [Pipeline] // catchError 00:14:07.137 [Pipeline] } 00:14:07.160 [Pipeline] // wrap 00:14:07.167 [Pipeline] } 00:14:07.180 [Pipeline] // catchError 00:14:07.191 [Pipeline] stage 00:14:07.194 [Pipeline] { (Epilogue) 00:14:07.207 [Pipeline] catchError 00:14:07.209 [Pipeline] { 00:14:07.221 [Pipeline] echo 00:14:07.223 Cleanup processes 00:14:07.229 [Pipeline] sh 00:14:07.518 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:07.518 2121139 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:14:07.518 2121394 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:07.532 [Pipeline] sh 00:14:07.820 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:07.820 ++ grep -v 'sudo pgrep' 00:14:07.820 ++ awk '{print $1}' 00:14:07.820 + sudo kill -9 2121139 00:14:07.832 [Pipeline] sh 00:14:08.123 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:14:20.355 [Pipeline] sh 00:14:20.700 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:14:20.700 Artifacts sizes are good 00:14:20.777 
[Pipeline] archiveArtifacts 00:14:20.793 Archiving artifacts 00:14:20.932 [Pipeline] sh 00:14:21.296 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:14:21.312 [Pipeline] cleanWs 00:14:21.322 [WS-CLEANUP] Deleting project workspace... 00:14:21.322 [WS-CLEANUP] Deferred wipeout is used... 00:14:21.328 [WS-CLEANUP] done 00:14:21.331 [Pipeline] } 00:14:21.349 [Pipeline] // catchError 00:14:21.361 [Pipeline] sh 00:14:21.643 + logger -p user.info -t JENKINS-CI 00:14:21.651 [Pipeline] } 00:14:21.664 [Pipeline] // stage 00:14:21.668 [Pipeline] } 00:14:21.685 [Pipeline] // node 00:14:21.690 [Pipeline] End of Pipeline 00:14:21.726 Finished: SUCCESS