00:00:00.001 Started by upstream project "autotest-per-patch" build number 130840
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "jbp-per-patch" build number 25670
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.026 The recommended git tool is: git
00:00:00.026 using credential 00000000-0000-0000-0000-000000000002
00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.088 Using shallow fetch with depth 1
00:00:00.088 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.088 > git --version # timeout=10
00:00:00.129 > git --version # 'git version 2.39.2'
00:00:00.129 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/42/25142/4 # timeout=5
00:00:03.928 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.940 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.953 Checking out Revision 67cd2f1639a8077ee9fc0f9259e068d0e5b67761 (FETCH_HEAD)
00:00:03.953 > git config core.sparsecheckout # timeout=10
00:00:03.967 > git read-tree -mu HEAD # timeout=10
00:00:03.985 > git checkout -f 67cd2f1639a8077ee9fc0f9259e068d0e5b67761 # timeout=5
00:00:04.008 Commit message: "jenkins/jjb-config: Use dedicated image version for LTS builds"
00:00:04.008 > git rev-list --no-walk a2a0152a3dc15677a5c251d111224a14844e26b9 # timeout=10
00:00:04.114 [Pipeline] Start of Pipeline
00:00:04.127 [Pipeline] library
00:00:04.129 Loading library shm_lib@master
00:00:04.129 Library shm_lib@master is cached. Copying from home.
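The checkout above pulls a single Gerrit patch set by its change ref rather than a branch: refs/changes/42/25142/4 encodes change 25142, patch set 4 (the leading 42 is the change number's last two digits). A minimal sketch of reproducing that fetch by hand, assuming git 2.x and access to the same repository:

    git init jbp && cd jbp
    # Shallow-fetch only the patch-set ref; --depth=1 skips history, as in the log
    git fetch --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/42/25142/4
    # Detach onto the fetched commit, mirroring the `git checkout -f <sha>` above
    git checkout -f FETCH_HEAD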
00:00:04.144 [Pipeline] node
00:00:04.164 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:04.166 [Pipeline] {
00:00:04.176 [Pipeline] }
00:00:04.189 [Pipeline] // node
00:00:04.198 [Pipeline] node
00:00:04.204 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:04.206 [Pipeline] {
00:00:04.215 [Pipeline] catchError
00:00:04.216 [Pipeline] {
00:00:04.227 [Pipeline] wrap
00:00:04.235 [Pipeline] {
00:00:04.240 [Pipeline] stage
00:00:04.241 [Pipeline] { (Prologue)
00:00:04.431 [Pipeline] sh
00:00:04.718 + logger -p user.info -t JENKINS-CI
00:00:04.738 [Pipeline] echo
00:00:04.740 Node: WFP39
00:00:04.747 [Pipeline] sh
00:00:05.043 [Pipeline] setCustomBuildProperty
00:00:05.054 [Pipeline] echo
00:00:05.055 Cleanup processes
00:00:05.060 [Pipeline] sh
00:00:05.342 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.342 364408 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.356 [Pipeline] sh
00:00:05.643 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.643 ++ grep -v 'sudo pgrep'
00:00:05.643 ++ awk '{print $1}'
00:00:05.643 + sudo kill -9
00:00:05.643 + true
00:00:05.656 [Pipeline] cleanWs
00:00:05.665 [WS-CLEANUP] Deleting project workspace...
00:00:05.665 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.672 [WS-CLEANUP] done
00:00:05.676 [Pipeline] setCustomBuildProperty
00:00:05.693 [Pipeline] sh
00:00:05.979 + sudo git config --global --replace-all safe.directory '*'
00:00:06.071 [Pipeline] httpRequest
00:00:06.506 [Pipeline] echo
00:00:06.508 Sorcerer 10.211.164.101 is alive
00:00:06.518 [Pipeline] retry
00:00:06.519 [Pipeline] {
00:00:06.531 [Pipeline] httpRequest
00:00:06.536 HttpMethod: GET
00:00:06.536 URL: http://10.211.164.101/packages/jbp_67cd2f1639a8077ee9fc0f9259e068d0e5b67761.tar.gz
00:00:06.537 Sending request to url: http://10.211.164.101/packages/jbp_67cd2f1639a8077ee9fc0f9259e068d0e5b67761.tar.gz
00:00:06.577 Response Code: HTTP/1.1 200 OK
00:00:06.577 Success: Status code 200 is in the accepted range: 200,404
00:00:06.577 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_67cd2f1639a8077ee9fc0f9259e068d0e5b67761.tar.gz
00:00:19.206 [Pipeline] }
00:00:19.226 [Pipeline] // retry
00:00:19.235 [Pipeline] sh
00:00:19.523 + tar --no-same-owner -xf jbp_67cd2f1639a8077ee9fc0f9259e068d0e5b67761.tar.gz
00:00:19.541 [Pipeline] httpRequest
00:00:19.946 [Pipeline] echo
00:00:19.948 Sorcerer 10.211.164.101 is alive
00:00:19.959 [Pipeline] retry
00:00:19.962 [Pipeline] {
00:00:19.978 [Pipeline] httpRequest
00:00:19.983 HttpMethod: GET
00:00:19.983 URL: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:19.984 Sending request to url: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:20.003 Response Code: HTTP/1.1 200 OK
00:00:20.004 Success: Status code 200 is in the accepted range: 200,404
00:00:20.004 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:58.493 [Pipeline] }
00:00:58.512 [Pipeline] // retry
00:00:58.520 [Pipeline] sh
00:00:58.808 + tar --no-same-owner -xf spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:01:01.364 [Pipeline] sh
00:01:01.651 + git -C spdk log --oneline -n5
00:01:01.651 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:01:01.651 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:01:01.651 82c46626a lib/event: implement scheduler trace events
00:01:01.651 fa6aec495 lib/thread: register thread owner type for scheduler trace events
00:01:01.651 1876d41a3 include/spdk_internal: define scheduler tracegroup and tracepoints
00:01:01.662 [Pipeline] }
00:01:01.677 [Pipeline] // stage
00:01:01.686 [Pipeline] stage
00:01:01.689 [Pipeline] { (Prepare)
00:01:01.707 [Pipeline] writeFile
00:01:01.724 [Pipeline] sh
00:01:02.011 + logger -p user.info -t JENKINS-CI
00:01:02.024 [Pipeline] sh
00:01:02.310 + logger -p user.info -t JENKINS-CI
00:01:02.323 [Pipeline] sh
00:01:02.608 + cat autorun-spdk.conf
00:01:02.608 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.608 SPDK_TEST_FUZZER_SHORT=1
00:01:02.608 SPDK_TEST_FUZZER=1
00:01:02.608 SPDK_TEST_SETUP=1
00:01:02.608 SPDK_RUN_UBSAN=1
00:01:02.616 RUN_NIGHTLY=0
00:01:02.621 [Pipeline] readFile
00:01:02.645 [Pipeline] withEnv
00:01:02.647 [Pipeline] {
00:01:02.660 [Pipeline] sh
00:01:02.947 + set -ex
00:01:02.947 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:02.947 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:02.947 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.947 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:02.947 ++ SPDK_TEST_FUZZER=1
00:01:02.947 ++ SPDK_TEST_SETUP=1
00:01:02.947 ++ SPDK_RUN_UBSAN=1
00:01:02.947 ++ RUN_NIGHTLY=0
00:01:02.947 + case $SPDK_TEST_NVMF_NICS in
00:01:02.947 + DRIVERS=
00:01:02.947 + [[ -n '' ]]
00:01:02.947 + exit 0
00:01:02.960 [Pipeline] }
00:01:02.996 [Pipeline] // withEnv
00:01:02.999 [Pipeline] }
00:01:03.008 [Pipeline] // stage
00:01:03.014 [Pipeline] catchError
00:01:03.015 [Pipeline] {
00:01:03.023 [Pipeline] timeout
00:01:03.023 Timeout set to expire in 30 min
00:01:03.024 [Pipeline] {
00:01:03.034 [Pipeline] stage
00:01:03.035 [Pipeline] { (Tests)
00:01:03.047 [Pipeline] sh
00:01:03.329 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:03.329 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:03.329 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:03.329 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:03.329 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:03.329 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:03.329 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:03.329 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:03.329 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:03.329 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:03.329 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:03.329 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:03.329 + source /etc/os-release
00:01:03.329 ++ NAME='Fedora Linux'
00:01:03.329 ++ VERSION='39 (Cloud Edition)'
00:01:03.329 ++ ID=fedora
00:01:03.329 ++ VERSION_ID=39
00:01:03.329 ++ VERSION_CODENAME=
00:01:03.329 ++ PLATFORM_ID=platform:f39
00:01:03.329 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:03.329 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:03.329 ++ LOGO=fedora-logo-icon
00:01:03.329 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:03.329 ++ HOME_URL=https://fedoraproject.org/
00:01:03.329 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:03.329 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:03.329 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:03.329 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:03.329 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:03.329 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:03.329 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:03.329 ++ SUPPORT_END=2024-11-12
00:01:03.329 ++ VARIANT='Cloud Edition'
00:01:03.329 ++ VARIANT_ID=cloud
00:01:03.329 + uname -a
00:01:03.329 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:03.329 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:06.624 Hugepages
00:01:06.624 node hugesize free / total
00:01:06.624 node0 1048576kB 0 / 0
00:01:06.624 node0 2048kB 0 / 0
00:01:06.624 node1 1048576kB 0 / 0
00:01:06.624 node1 2048kB 0 / 0
00:01:06.624
00:01:06.624 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:06.624 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:06.624 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:06.624 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:06.624 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:06.624 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:06.624 + rm -f /tmp/spdk-ld-path
00:01:06.624 + source autorun-spdk.conf
00:01:06.624 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.624 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:06.624 ++ SPDK_TEST_FUZZER=1
00:01:06.624 ++ SPDK_TEST_SETUP=1
00:01:06.624 ++ SPDK_RUN_UBSAN=1
00:01:06.624 ++ RUN_NIGHTLY=0
00:01:06.624 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:06.624 + [[ -n '' ]]
00:01:06.624 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:06.624 + for M in /var/spdk/build-*-manifest.txt
00:01:06.624 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:06.624 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:06.624 + for M in /var/spdk/build-*-manifest.txt
00:01:06.624 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:06.624 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:06.624 + for M in /var/spdk/build-*-manifest.txt
00:01:06.624 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:06.624 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:01:06.624 ++ uname
00:01:06.624 + [[ Linux == \L\i\n\u\x ]]
00:01:06.624 + sudo dmesg -T
00:01:06.624 + sudo dmesg --clear
00:01:06.624 + dmesg_pid=365351
00:01:06.624 + [[ Fedora Linux == FreeBSD ]]
00:01:06.624 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:06.624 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:06.624 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:06.624 + [[ -x /usr/src/fio-static/fio ]]
00:01:06.624 + export FIO_BIN=/usr/src/fio-static/fio
00:01:06.624 + FIO_BIN=/usr/src/fio-static/fio
00:01:06.624 + sudo dmesg -Tw
00:01:06.624 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:06.624 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:06.624 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:06.624 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:06.624 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:06.624 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:06.624 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:06.624 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:06.624 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:06.624 Test configuration:
00:01:06.624 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.624 SPDK_TEST_FUZZER_SHORT=1
00:01:06.624 SPDK_TEST_FUZZER=1
00:01:06.624 SPDK_TEST_SETUP=1
00:01:06.624 SPDK_RUN_UBSAN=1
00:01:06.624 RUN_NIGHTLY=0
09:20:02 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
09:20:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
09:20:02 -- scripts/common.sh@15 -- $ shopt -s extglob
09:20:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
09:20:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:20:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:20:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:20:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:20:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:20:02 -- paths/export.sh@5 -- $ export PATH
09:20:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:20:02 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
09:20:02 -- common/autobuild_common.sh@486 -- $ date +%s
09:20:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728285602.XXXXXX
09:20:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728285602.Q5ZILh
09:20:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
09:20:02 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
09:20:02 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
09:20:02 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
09:20:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
09:20:02 -- common/autobuild_common.sh@502 -- $ get_config_params
09:20:02 -- common/autotest_common.sh@407 -- $ xtrace_disable
09:20:02 -- common/autotest_common.sh@10 -- $ set +x
09:20:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
09:20:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
09:20:02 -- pm/common@17 -- $ local monitor
09:20:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:20:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:20:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:20:02 -- pm/common@21 -- $ date +%s
09:20:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:20:02 -- pm/common@21 -- $ date +%s
09:20:02 -- pm/common@25 -- $ sleep 1
09:20:02 -- pm/common@21 -- $ date +%s
09:20:02 -- pm/common@21 -- $ date +%s
09:20:02 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285602
09:20:02 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285602
09:20:02 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285602
09:20:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728285602
00:01:06.624 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285602_collect-vmstat.pm.log
00:01:06.624 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285602_collect-cpu-load.pm.log
00:01:06.624 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285602_collect-cpu-temp.pm.log
00:01:06.625 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728285602_collect-bmc-pm.bmc.pm.log
00:01:07.562 09:20:03 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:07.562 09:20:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:07.562 09:20:03 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:07.562 09:20:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:07.562 09:20:03 -- spdk/autobuild.sh@16 -- $ date -u
00:01:07.562 Mon Oct 7 07:20:03 AM UTC 2024
00:01:07.562 09:20:03 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:07.562 v25.01-pre-35-g3950cd1bb
00:01:07.562 09:20:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:07.562 09:20:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:07.562 09:20:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:07.562 09:20:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:07.562 09:20:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:07.562 09:20:03 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.820 ************************************
00:01:07.820 START TEST ubsan
00:01:07.820 ************************************
00:01:07.820 09:20:03 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:07.820 using ubsan
00:01:07.820
00:01:07.820 real 0m0.001s
00:01:07.820 user 0m0.001s
00:01:07.820 sys 0m0.000s
00:01:07.820 09:20:03 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:07.820 09:20:03 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:07.820 ************************************
00:01:07.820 END TEST ubsan
00:01:07.820 ************************************
00:01:07.820 09:20:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:07.820 09:20:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:07.820 09:20:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:07.820 09:20:03 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:01:07.820 09:20:03 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:01:07.820 09:20:03 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:01:07.820 09:20:03 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:07.820 09:20:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:07.820 09:20:03 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.820 ************************************
00:01:07.820 START TEST autobuild_llvm_precompile
00:01:07.820 ************************************
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:01:07.820 Target: x86_64-redhat-linux-gnu
00:01:07.820 Thread model: posix
00:01:07.820 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
00:01:07.820 09:20:03 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:08.078 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:08.078 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:08.646 Using 'verbs' RDMA provider
00:01:24.599 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:36.814 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:36.814 Creating mk/config.mk...done.
00:01:36.814 Creating mk/cc.flags.mk...done.
00:01:36.814 Type 'make' to build.
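The fuzzer_libs assignment above depends on bash extended globbing, enabled earlier by the `shopt -s extglob` in scripts/common.sh: `@(...)` matches one of the alternatives and `?(...)` makes the `-x86_64` suffix optional. A standalone sketch of the same detection, assuming a Fedora-style clang layout; the final echo is illustrative only:

    #!/usr/bin/env bash
    shopt -s extglob    # extglob is off by default in scripts; @(...) and ?(...) need it
    clang_num=17
    # Match the major-version clang directory, with or without a -x86_64
    # suffix on the archive name, under any /usr/lib* prefix
    fuzzer_libs=(/usr/lib*/clang/@("$clang_num")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    [[ -e ${fuzzer_libs[0]} ]] && echo "fuzzer lib: ${fuzzer_libs[0]}"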
00:01:36.814
00:01:36.814 real 0m29.116s
00:01:36.814 user 0m13.045s
00:01:36.814 sys 0m15.406s
00:01:36.814 09:20:32 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:36.814 09:20:32 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:36.814 ************************************
00:01:36.814 END TEST autobuild_llvm_precompile
00:01:36.814 ************************************
00:01:37.073 09:20:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:37.073 09:20:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:37.073 09:20:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:37.073 09:20:32 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:01:37.073 09:20:32 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:01:37.333 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:37.333 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:37.592 Using 'verbs' RDMA provider
00:01:50.743 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:00.763 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:01.329 Creating mk/config.mk...done.
00:02:01.329 Creating mk/cc.flags.mk...done.
00:02:01.329 Type 'make' to build.
00:02:01.329 09:20:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j72
00:02:01.329 09:20:56 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:01.329 09:20:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:01.329 09:20:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:01.329 ************************************
00:02:01.329 START TEST make
00:02:01.329 ************************************
00:02:01.329 09:20:56 make -- common/autotest_common.sh@1125 -- $ make -j72
00:02:01.588 make[1]: Nothing to be done for 'all'.
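run_test here is the timing-and-banner wrapper from SPDK's common/autotest_common.sh; the START TEST/END TEST banners and the real/user/sys lines above are its output. An illustrative stand-in showing the shape of such a wrapper, not the actual SPDK implementation:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # the bash `time` keyword prints the real/user/sys lines
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test make make -j"$(nproc)"   # -j pinned to the CPU count rather than a fixed 72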
00:02:03.506 The Meson build system
00:02:03.506 Version: 1.5.0
00:02:03.506 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:02:03.506 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:03.506 Build type: native build
00:02:03.506 Project name: libvfio-user
00:02:03.506 Project version: 0.0.1
00:02:03.506 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:03.506 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:03.506 Host machine cpu family: x86_64
00:02:03.506 Host machine cpu: x86_64
00:02:03.506 Run-time dependency threads found: YES
00:02:03.506 Library dl found: YES
00:02:03.506 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:03.506 Run-time dependency json-c found: YES 0.17
00:02:03.507 Run-time dependency cmocka found: YES 1.1.7
00:02:03.507 Program pytest-3 found: NO
00:02:03.507 Program flake8 found: NO
00:02:03.507 Program misspell-fixer found: NO
00:02:03.507 Program restructuredtext-lint found: NO
00:02:03.507 Program valgrind found: YES (/usr/bin/valgrind)
00:02:03.507 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:03.507 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:03.507 Compiler for C supports arguments -Wwrite-strings: YES
00:02:03.507 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:03.507 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:03.507 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:03.507 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
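The configuration above is an ordinary Meson debug setup of libvfio-user. A sketch of reproducing it standalone from a libvfio-user checkout, assuming meson and ninja are installed; the build directory name follows the log:

    meson setup build-debug --buildtype=debug -Ddefault_library=static -Dlibdir=/usr/local/lib
    ninja -C build-debug
    # Stage the install under a scratch root, as the DESTDIR invocation below does
    DESTDIR=/tmp/libvfio-user-root meson install --quiet -C build-debug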
00:02:03.507 Build targets in project: 8
00:02:03.507 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:03.507 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:03.507
00:02:03.507 libvfio-user 0.0.1
00:02:03.507
00:02:03.507 User defined options
00:02:03.507 buildtype : debug
00:02:03.507 default_library: static
00:02:03.507 libdir : /usr/local/lib
00:02:03.507
00:02:03.507 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:03.507 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:03.507 [1/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:02:03.507 [2/36] Compiling C object samples/lspci.p/lspci.c.o
00:02:03.507 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:03.507 [4/36] Compiling C object samples/null.p/null.c.o
00:02:03.507 [5/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:02:03.507 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:03.507 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:02:03.507 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:03.507 [9/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:03.507 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:03.507 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:02:03.766 [12/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:02:03.766 [13/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:03.766 [14/36] Compiling C object test/unit_tests.p/mocks.c.o
00:02:03.766 [15/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:03.766 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:03.766 [17/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:03.766 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:02:03.766 [19/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:02:03.766 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:03.766 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:03.766 [22/36] Compiling C object samples/server.p/server.c.o
00:02:03.766 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:03.766 [24/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:03.766 [25/36] Compiling C object samples/client.p/client.c.o
00:02:03.766 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:03.766 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:03.766 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:02:03.766 [29/36] Linking target samples/client
00:02:03.766 [30/36] Linking static target lib/libvfio-user.a
00:02:03.766 [31/36] Linking target test/unit_tests
00:02:03.766 [32/36] Linking target samples/lspci
00:02:03.766 [33/36] Linking target samples/gpio-pci-idio-16
00:02:03.766 [34/36] Linking target samples/null
00:02:03.766 [35/36] Linking target samples/server
00:02:03.766 [36/36] Linking target samples/shadow_ioeventfd_server
00:02:03.766 INFO: autodetecting backend as ninja
00:02:03.766 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:03.766 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.331 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:04.331 ninja: no work to do.
00:02:10.906 The Meson build system
00:02:10.906 Version: 1.5.0
00:02:10.906 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:02:10.906 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:02:10.906 Build type: native build
00:02:10.906 Program cat found: YES (/usr/bin/cat)
00:02:10.906 Project name: DPDK
00:02:10.906 Project version: 24.03.0
00:02:10.906 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:02:10.906 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:02:10.906 Host machine cpu family: x86_64
00:02:10.906 Host machine cpu: x86_64
00:02:10.906 Message: ## Building in Developer Mode ##
00:02:10.906 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:10.906 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:10.906 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:10.906 Program python3 found: YES (/usr/bin/python3)
00:02:10.906 Program cat found: YES (/usr/bin/cat)
00:02:10.906 Compiler for C supports arguments -march=native: YES
00:02:10.906 Checking for size of "void *" : 8
00:02:10.906 Checking for size of "void *" : 8 (cached)
00:02:10.906 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:10.906 Library m found: YES
00:02:10.906 Library numa found: YES
00:02:10.906 Has header "numaif.h" : YES
00:02:10.906 Library fdt found: NO
00:02:10.906 Library execinfo found: NO
00:02:10.906 Has header "execinfo.h" : YES
00:02:10.906 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:10.906 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:10.906 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:10.906 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:10.906 Run-time dependency openssl found: YES 3.1.1
00:02:10.906 Run-time dependency libpcap found: YES 1.10.4
00:02:10.906 Has header "pcap.h" with dependency libpcap: YES
00:02:10.906 Compiler for C supports arguments -Wcast-qual: YES
00:02:10.906 Compiler for C supports arguments -Wdeprecated: YES
00:02:10.906 Compiler for C supports arguments -Wformat: YES
00:02:10.906 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:10.906 Compiler for C supports arguments -Wformat-security: YES
00:02:10.906 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:10.906 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:10.906 Compiler for C supports arguments -Wnested-externs: YES
00:02:10.906 Compiler for C supports arguments -Wold-style-definition: YES
00:02:10.906 Compiler for C supports arguments -Wpointer-arith: YES
00:02:10.906 Compiler for C supports arguments -Wsign-compare: YES
00:02:10.906 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:10.906 Compiler for C supports arguments -Wundef: YES
00:02:10.906 Compiler for C supports arguments -Wwrite-strings: YES
00:02:10.906 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:10.906 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:02:10.906 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:10.906 Program objdump found: YES (/usr/bin/objdump)
00:02:10.906 Compiler for C supports arguments -mavx512f: YES
00:02:10.906 Checking if "AVX512 checking" compiles: YES
00:02:10.906 Fetching value of define "__SSE4_2__" : 1
00:02:10.906 Fetching value of define "__AES__" : 1
00:02:10.906 Fetching value of define "__AVX__" : 1
00:02:10.906 Fetching value of define "__AVX2__" : 1
00:02:10.906 Fetching value of define "__AVX512BW__" : 1
00:02:10.906 Fetching value of define "__AVX512CD__" : 1
00:02:10.906 Fetching value of define "__AVX512DQ__" : 1
00:02:10.906 Fetching value of define "__AVX512F__" : 1
00:02:10.906 Fetching value of define "__AVX512VL__" : 1
00:02:10.906 Fetching value of define "__PCLMUL__" : 1
00:02:10.906 Fetching value of define "__RDRND__" : 1
00:02:10.906 Fetching value of define "__RDSEED__" : 1
00:02:10.906 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:10.906 Fetching value of define "__znver1__" : (undefined)
00:02:10.906 Fetching value of define "__znver2__" : (undefined)
00:02:10.906 Fetching value of define "__znver3__" : (undefined)
00:02:10.906 Fetching value of define "__znver4__" : (undefined)
00:02:10.906 Compiler for C supports arguments -Wno-format-truncation: NO
00:02:10.906 Message: lib/log: Defining dependency "log"
00:02:10.906 Message: lib/kvargs: Defining dependency "kvargs"
00:02:10.906 Message: lib/telemetry: Defining dependency "telemetry"
00:02:10.906 Checking for function "getentropy" : NO
00:02:10.906 Message: lib/eal: Defining dependency "eal"
00:02:10.906 Message: lib/ring: Defining dependency "ring"
00:02:10.906 Message: lib/rcu: Defining dependency "rcu"
00:02:10.906 Message: lib/mempool: Defining dependency "mempool"
00:02:10.906 Message: lib/mbuf: Defining dependency "mbuf"
00:02:10.906 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:10.907 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:10.907 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:10.907 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:10.907 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:10.907 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:10.907 Compiler for C supports arguments -mpclmul: YES
00:02:10.907 Compiler for C supports arguments -maes: YES
00:02:10.907 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:10.907 Compiler for C supports arguments -mavx512bw: YES
00:02:10.907 Compiler for C supports arguments -mavx512dq: YES
00:02:10.907 Compiler for C supports arguments -mavx512vl: YES
00:02:10.907 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:10.907 Compiler for C supports arguments -mavx2: YES
00:02:10.907 Compiler for C supports arguments -mavx: YES
00:02:10.907 Message: lib/net: Defining dependency "net"
00:02:10.907 Message: lib/meter: Defining dependency "meter"
00:02:10.907 Message: lib/ethdev: Defining dependency "ethdev"
00:02:10.907 Message: lib/pci: Defining dependency "pci"
00:02:10.907 Message: lib/cmdline: Defining dependency "cmdline"
00:02:10.907 Message: lib/hash: Defining dependency "hash"
00:02:10.907 Message: lib/timer: Defining dependency "timer"
00:02:10.907 Message: lib/compressdev: Defining dependency "compressdev"
00:02:10.907 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:10.907 Message: lib/dmadev: Defining dependency "dmadev"
00:02:10.907 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:10.907 Message: lib/power: Defining dependency "power"
00:02:10.907 Message: lib/reorder: Defining dependency "reorder"
00:02:10.907 Message: lib/security: Defining dependency "security"
00:02:10.907 Has header "linux/userfaultfd.h" : YES
00:02:10.907 Has header "linux/vduse.h" : YES
00:02:10.907 Message: lib/vhost: Defining dependency "vhost"
00:02:10.907 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:10.907 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:10.907 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:10.907 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:10.907 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:10.907 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:10.907 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:10.907 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:10.907 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:10.907 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:10.907 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:10.907 Configuring doxy-api-html.conf using configuration
00:02:10.907 Configuring doxy-api-man.conf using configuration
00:02:10.907 Program mandb found: YES (/usr/bin/mandb)
00:02:10.907 Program sphinx-build found: NO
00:02:10.907 Configuring rte_build_config.h using configuration
00:02:10.907 Message:
00:02:10.907 =================
00:02:10.907 Applications Enabled
00:02:10.907 =================
00:02:10.907
00:02:10.907 apps:
00:02:10.907
00:02:10.907
00:02:10.907 Message:
00:02:10.907 =================
00:02:10.907 Libraries Enabled
00:02:10.907 =================
00:02:10.907
00:02:10.907 libs:
00:02:10.907 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:10.907 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:10.907 cryptodev, dmadev, power, reorder, security, vhost,
00:02:10.907
00:02:10.907 Message:
00:02:10.907 ===============
00:02:10.907 Drivers Enabled
00:02:10.907 ===============
00:02:10.907
00:02:10.907 common:
00:02:10.907
00:02:10.907 bus:
00:02:10.907 pci, vdev,
00:02:10.907 mempool:
00:02:10.907 ring,
00:02:10.907 dma:
00:02:10.907
00:02:10.907 net:
00:02:10.907
00:02:10.907 crypto:
00:02:10.907
00:02:10.907 compress:
00:02:10.907
00:02:10.907 vdpa:
00:02:10.907
00:02:10.907
00:02:10.907 Message:
00:02:10.907 =================
00:02:10.907 Content Skipped
00:02:10.907 =================
00:02:10.907
00:02:10.907 apps:
00:02:10.907 dumpcap: explicitly disabled via build config
00:02:10.907 graph: explicitly disabled via build config
00:02:10.907 pdump: explicitly disabled via build config
00:02:10.907 proc-info: explicitly disabled via build config
00:02:10.907 test-acl: explicitly disabled via build config
00:02:10.907 test-bbdev: explicitly disabled via build config
00:02:10.907 test-cmdline: explicitly disabled via build config
00:02:10.907 test-compress-perf: explicitly disabled via build config
00:02:10.907 test-crypto-perf: explicitly disabled via build config
00:02:10.907 test-dma-perf: explicitly disabled via build config
00:02:10.907 test-eventdev: explicitly disabled via build config
00:02:10.907 test-fib: explicitly disabled via build config
00:02:10.907 test-flow-perf: explicitly disabled via build config
00:02:10.907 test-gpudev: explicitly disabled via build config
00:02:10.907 test-mldev: explicitly disabled via build config
00:02:10.907 test-pipeline: explicitly disabled via build config
00:02:10.907 test-pmd: explicitly disabled via build config
00:02:10.907 test-regex: explicitly disabled via build config
00:02:10.907 test-sad: explicitly disabled via build config
00:02:10.907 test-security-perf: explicitly disabled via build config
00:02:10.907
00:02:10.907 libs:
00:02:10.907 argparse: explicitly disabled via build config
00:02:10.907 metrics: explicitly disabled via build config
00:02:10.907 acl: explicitly disabled via build config
00:02:10.907 bbdev: explicitly disabled via build config
00:02:10.907 bitratestats: explicitly disabled via build config
00:02:10.907 bpf: explicitly disabled via build config
00:02:10.907 cfgfile: explicitly disabled via build config
00:02:10.907 distributor: explicitly disabled via build config
00:02:10.907 efd: explicitly disabled via build config
00:02:10.907 eventdev: explicitly disabled via build config
00:02:10.907 dispatcher: explicitly disabled via build config
00:02:10.907 gpudev: explicitly disabled via build config
00:02:10.907 gro: explicitly disabled via build config
00:02:10.907 gso: explicitly disabled via build config
00:02:10.907 ip_frag: explicitly disabled via build config
00:02:10.907 jobstats: explicitly disabled via build config
00:02:10.907 latencystats: explicitly disabled via build config
00:02:10.907 lpm: explicitly disabled via build config
00:02:10.907 member: explicitly disabled via build config
00:02:10.907 pcapng: explicitly disabled via build config
00:02:10.907 rawdev: explicitly disabled via build config
00:02:10.907 regexdev: explicitly disabled via build config
00:02:10.907 mldev: explicitly disabled via build config
00:02:10.907 rib: explicitly disabled via build config
00:02:10.907 sched: explicitly disabled via build config
00:02:10.907 stack: explicitly disabled via build config
00:02:10.907 ipsec: explicitly disabled via build config
00:02:10.907 pdcp: explicitly disabled via build config
00:02:10.907 fib: explicitly disabled via build config
00:02:10.907 port: explicitly disabled via build config
00:02:10.907 pdump: explicitly disabled via build config
00:02:10.907 table: explicitly disabled via build config
00:02:10.907 pipeline: explicitly disabled via build config
00:02:10.907 graph: explicitly disabled via build config
00:02:10.907 node: explicitly disabled via build config
00:02:10.907
00:02:10.907 drivers:
00:02:10.907 common/cpt: not in enabled drivers build config
00:02:10.907 common/dpaax: not in enabled drivers build config
00:02:10.907 common/iavf: not in enabled drivers build config
00:02:10.907 common/idpf: not in enabled drivers build config
00:02:10.907 common/ionic: not in enabled drivers build config
00:02:10.907 common/mvep: not in enabled drivers build config
00:02:10.907 common/octeontx: not in enabled drivers build config
00:02:10.907 bus/auxiliary: not in enabled drivers build config
00:02:10.907 bus/cdx: not in enabled drivers build config
00:02:10.907 bus/dpaa: not in enabled drivers build config
00:02:10.907 bus/fslmc: not in enabled drivers build config
00:02:10.907 bus/ifpga: not in enabled drivers build config
00:02:10.907 bus/platform: not in enabled drivers build config
00:02:10.907 bus/uacce: not in enabled drivers build config
00:02:10.907 bus/vmbus: not in enabled drivers build config
00:02:10.907 common/cnxk: not in enabled drivers build config
00:02:10.907 common/mlx5: not in enabled drivers build config
00:02:10.907 common/nfp: not in enabled drivers build config
00:02:10.907 common/nitrox: not in enabled drivers build config
00:02:10.907 common/qat: not in enabled drivers build config
00:02:10.907 common/sfc_efx: not in enabled drivers build config
00:02:10.907 mempool/bucket: not in enabled drivers build config
00:02:10.907 mempool/cnxk: not in enabled drivers build config
00:02:10.907 mempool/dpaa: not in enabled drivers build config
00:02:10.907 mempool/dpaa2: not in enabled drivers build config
00:02:10.907 mempool/octeontx: not in enabled drivers build config
00:02:10.907 mempool/stack: not in enabled drivers build config
00:02:10.907 dma/cnxk: not in enabled drivers build config
00:02:10.907 dma/dpaa: not in enabled drivers build config
00:02:10.907 dma/dpaa2: not in enabled drivers build config
00:02:10.907 dma/hisilicon: not in enabled drivers build config
00:02:10.907 dma/idxd: not in enabled drivers build config
00:02:10.907 dma/ioat: not in enabled drivers build config
00:02:10.907 dma/skeleton: not in enabled drivers build config
00:02:10.907 net/af_packet: not in enabled drivers build config
00:02:10.907 net/af_xdp: not in enabled drivers build config
00:02:10.907 net/ark: not in enabled drivers build config
00:02:10.907 net/atlantic: not in enabled drivers build config
00:02:10.907 net/avp: not in enabled drivers build config
00:02:10.907 net/axgbe: not in enabled drivers build config
00:02:10.907 net/bnx2x: not in enabled drivers build config
00:02:10.907 net/bnxt: not in enabled drivers build config
00:02:10.907 net/bonding: not in enabled drivers build config
00:02:10.907 net/cnxk: not in enabled drivers build config
00:02:10.907 net/cpfl: not in enabled drivers build config
00:02:10.907 net/cxgbe: not in enabled drivers build config
00:02:10.908 net/dpaa: not in enabled drivers build config
00:02:10.908 net/dpaa2: not in enabled drivers build config
00:02:10.908 net/e1000: not in enabled drivers build config
00:02:10.908 net/ena: not in enabled drivers build config
00:02:10.908 net/enetc: not in enabled drivers build config
00:02:10.908 net/enetfec: not in enabled drivers build config
00:02:10.908 net/enic: not in enabled drivers build config
00:02:10.908 net/failsafe: not in enabled drivers build config
00:02:10.908 net/fm10k: not in enabled drivers build config
00:02:10.908 net/gve: not in enabled drivers build config
00:02:10.908 net/hinic: not in enabled drivers build config
00:02:10.908 net/hns3: not in enabled drivers build config
00:02:10.908 net/i40e: not in enabled drivers build config
00:02:10.908 net/iavf: not in enabled drivers build config
00:02:10.908 net/ice: not in enabled drivers build config
00:02:10.908 net/idpf: not in enabled drivers build config
00:02:10.908 net/igc: not in enabled drivers build config
00:02:10.908 net/ionic: not in enabled drivers build config
00:02:10.908 net/ipn3ke: not in enabled drivers build config
00:02:10.908 net/ixgbe: not in enabled drivers build config
00:02:10.908 net/mana: not in enabled drivers build config
00:02:10.908 net/memif: not in enabled drivers build config
00:02:10.908 net/mlx4: not in enabled drivers build config
00:02:10.908 net/mlx5: not in enabled drivers build config
00:02:10.908 net/mvneta: not in enabled drivers build config
00:02:10.908 net/mvpp2: not in enabled drivers build config
00:02:10.908 net/netvsc: not in enabled drivers build config
00:02:10.908 net/nfb: not in enabled drivers build config
00:02:10.908 net/nfp: not in enabled drivers build config
00:02:10.908 net/ngbe: not in enabled drivers build config
00:02:10.908 net/null: not in enabled drivers build config
00:02:10.908 net/octeontx: not in enabled drivers build config
00:02:10.908 net/octeon_ep: not in enabled drivers build config
00:02:10.908 net/pcap: not in enabled drivers build config
00:02:10.908 net/pfe: not in enabled drivers build config
00:02:10.908 net/qede: not in enabled drivers build config
00:02:10.908 net/ring: not in enabled drivers build config
00:02:10.908 net/sfc: not in enabled drivers build config
00:02:10.908 net/softnic: not in enabled drivers build config
00:02:10.908 net/tap: not in enabled drivers build config
00:02:10.908 net/thunderx: not in enabled drivers build config
00:02:10.908 net/txgbe: not in enabled drivers build config
00:02:10.908 net/vdev_netvsc: not in enabled drivers build config
00:02:10.908 net/vhost: not in enabled drivers build config
00:02:10.908 net/virtio: not in enabled drivers build config
00:02:10.908 net/vmxnet3: not in enabled drivers build config
00:02:10.908 raw/*: missing internal dependency, "rawdev"
00:02:10.908 crypto/armv8: not in enabled drivers build config
00:02:10.908 crypto/bcmfs: not in enabled drivers build config
00:02:10.908 crypto/caam_jr: not in enabled drivers build config
00:02:10.908 crypto/ccp: not in enabled drivers build config
00:02:10.908 crypto/cnxk: not in enabled drivers build config
00:02:10.908 crypto/dpaa_sec: not in enabled drivers build config
00:02:10.908 crypto/dpaa2_sec: not in enabled drivers build config
00:02:10.908 crypto/ipsec_mb: not in enabled drivers build config
00:02:10.908 crypto/mlx5: not in enabled drivers build config
00:02:10.908 crypto/mvsam: not in enabled drivers build config
00:02:10.908 crypto/nitrox: not in enabled drivers build config
00:02:10.908 crypto/null: not in enabled drivers build config
00:02:10.908 crypto/octeontx: not in enabled drivers build config
00:02:10.908 crypto/openssl: not in enabled drivers build config
00:02:10.908 crypto/scheduler: not in enabled drivers build config
00:02:10.908 crypto/uadk: not in enabled drivers build config
00:02:10.908 crypto/virtio: not in enabled drivers build config
00:02:10.908 compress/isal: not in enabled drivers build config
00:02:10.908 compress/mlx5: not in enabled drivers build config
00:02:10.908 compress/nitrox: not in enabled drivers build config
00:02:10.908 compress/octeontx: not in enabled drivers build config
00:02:10.908 compress/zlib: not in enabled drivers build config
00:02:10.908 regex/*: missing internal dependency, "regexdev"
00:02:10.908 ml/*: missing internal dependency, "mldev"
00:02:10.908 vdpa/ifc: not in enabled drivers build config
00:02:10.908 vdpa/mlx5: not in enabled drivers build config
00:02:10.908 vdpa/nfp: not in enabled drivers build config
00:02:10.908 vdpa/sfc: not in enabled drivers build config
00:02:10.908 event/*: missing internal dependency, "eventdev"
00:02:10.908 baseband/*: missing internal dependency, "bbdev"
00:02:10.908 gpu/*: missing internal dependency, "gpudev"
00:02:10.908
00:02:10.908
00:02:10.908 Build targets in project: 85
00:02:10.908
00:02:10.908 DPDK 24.03.0
00:02:10.908
00:02:10.908 User defined options
00:02:10.908 buildtype : debug
00:02:10.908 default_library : static
00:02:10.908 libdir : lib
00:02:10.908 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:02:10.908 c_args : -fPIC -Werror
00:02:10.908 c_link_args :
00:02:10.908 cpu_instruction_set: native
00:02:10.908 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:10.908 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:10.908 enable_docs : false
00:02:10.908 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:10.908 enable_kmods : false
00:02:10.908 max_lcores : 128
00:02:10.908 tests : false
00:02:10.908
00:02:10.908 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:10.908 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:02:10.908 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:10.908 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:10.908 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:10.908 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:10.908 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:10.908 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:10.908 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:10.908 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:10.908 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:10.908 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:10.908 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:10.908 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:10.908 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:10.908 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:10.908 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:10.908 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:10.908 [17/268] Linking static target lib/librte_kvargs.a
00:02:10.908 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:10.908 [19/268] Linking static target lib/librte_log.a
00:02:11.167 [20/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:11.167 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:11.167 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:11.167 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:11.167 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:11.167 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:11.167 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:11.167 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:11.167 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:11.167 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:11.167 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:11.167 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:11.167 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:11.167 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:11.167 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:11.167 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:11.167 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:11.167 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:11.167 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:11.167 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:11.167 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:11.167 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:11.167 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:11.167 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:11.167 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:11.167 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:11.167 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:11.167 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:11.167 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:11.167 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:11.167 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:11.167 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:11.167 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:11.167 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:11.167 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:11.167 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:11.167 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:11.167 [57/268] Linking static target lib/librte_telemetry.a
00:02:11.167 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:11.167 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:11.167 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:11.167 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:11.426 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:11.426 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:11.426 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:11.426 [65/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:11.426 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:11.426 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:11.426 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:11.426 [69/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.426 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:11.426 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:11.426 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:11.426 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:11.426 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:11.426 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:11.426 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:11.426 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:11.426 [78/268] Linking static target lib/librte_ring.a
00:02:11.426 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:11.426 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:11.426 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:11.426 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:11.426 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:11.426 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:11.426 [85/268] Linking static target lib/librte_pci.a
00:02:11.426 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:11.426 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:11.426 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:11.426 [89/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:11.426 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:11.426 [91/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:11.426 [92/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:11.426 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:11.426 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:11.426 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:11.426 [96/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:11.426 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:11.426 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:11.426 [99/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:11.426 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:11.426 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:11.426 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:11.426 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:11.426 [104/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:11.426 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:11.426 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:11.426 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:11.426 [108/268] Linking static target lib/librte_eal.a
00:02:11.426 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:11.688 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:11.688 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:11.688 [112/268] Linking static target lib/librte_rcu.a
00:02:11.688 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:11.688 [114/268] Linking static target lib/librte_mempool.a
00:02:11.688 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:11.688
[116/268] Linking static target lib/librte_mbuf.a 00:02:11.688 [117/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.688 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.688 [119/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:11.688 [120/268] Linking target lib/librte_log.so.24.1 00:02:11.688 [121/268] Linking static target lib/librte_net.a 00:02:11.688 [122/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.946 [123/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.946 [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:11.946 [125/268] Linking static target lib/librte_meter.a 00:02:11.946 [126/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.946 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.946 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.946 [129/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:11.947 [130/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.947 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.947 [132/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.947 [133/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:11.947 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.947 [135/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:11.947 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.947 [137/268] Linking static target lib/librte_timer.a 00:02:11.947 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.947 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.947 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.947 [141/268] Linking static target lib/librte_cmdline.a 00:02:11.947 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:11.947 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.947 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:11.947 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:11.947 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.947 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.947 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.947 [149/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:11.947 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.947 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.947 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.947 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.947 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.947 [155/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.947 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.947 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.947 [158/268] Linking static target lib/librte_dmadev.a 00:02:11.947 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.947 [160/268] Linking target lib/librte_kvargs.so.24.1 00:02:11.947 [161/268] Linking target lib/librte_telemetry.so.24.1 00:02:11.947 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.947 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:11.947 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.947 [165/268] Linking static target lib/librte_compressdev.a 00:02:11.947 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.947 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:11.947 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.947 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.947 [170/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.947 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:11.947 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.947 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.207 [174/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.207 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.207 [176/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:12.207 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.207 [178/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.207 [179/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.207 [180/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:12.207 [181/268] Linking static target lib/librte_power.a 00:02:12.207 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.207 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.207 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.207 [185/268] Linking static target lib/librte_reorder.a 00:02:12.207 [186/268] Linking static target lib/librte_hash.a 00:02:12.207 [187/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.207 [188/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:12.207 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.207 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.207 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.208 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.208 [193/268] Linking static target lib/librte_security.a 00:02:12.208 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.208 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:12.208 [196/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.208 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.208 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.208 [199/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.208 [200/268] Linking static target lib/librte_cryptodev.a 00:02:12.208 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.208 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.208 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.208 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.208 [205/268] Linking static target drivers/librte_bus_vdev.a 00:02:12.472 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:12.472 [207/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.472 [208/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.472 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.472 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.472 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:12.472 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.472 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.472 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.472 [215/268] Linking static target drivers/librte_mempool_ring.a 00:02:12.472 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:12.472 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.789 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.789 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.789 [220/268] Linking static target lib/librte_ethdev.a 00:02:12.789 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.789 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.789 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.046 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.046 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.303 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:13.303 [227/268] Linking static target lib/librte_vhost.a 00:02:13.303 [228/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.303 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.675 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.241 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.801 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.708 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.708 [234/268] Linking target lib/librte_eal.so.24.1 00:02:23.968 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:23.968 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:23.968 [237/268] Linking target lib/librte_pci.so.24.1 00:02:23.968 [238/268] Linking target lib/librte_timer.so.24.1 00:02:23.968 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:23.968 [240/268] Linking target lib/librte_meter.so.24.1 00:02:23.968 [241/268] Linking target lib/librte_ring.so.24.1 00:02:24.228 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.228 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.228 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.228 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.228 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.228 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.228 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:24.228 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:24.487 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.487 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.487 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.487 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.487 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.746 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.746 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:24.746 [257/268] Linking target lib/librte_net.so.24.1 00:02:24.746 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.746 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:24.746 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:24.746 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:24.746 [262/268] Linking target lib/librte_hash.so.24.1 00:02:25.006 [263/268] Linking target lib/librte_security.so.24.1 00:02:25.006 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.006 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.006 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.006 [267/268] Linking target lib/librte_power.so.24.1 00:02:25.006 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.006 INFO: autodetecting backend as ninja 00:02:25.006 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:26.387 CC lib/ut/ut.o 00:02:26.387 CC lib/log/log_deprecated.o 00:02:26.387 CC lib/log/log.o 00:02:26.387 CC lib/log/log_flags.o 00:02:26.387 CC lib/ut_mock/mock.o 00:02:26.387 LIB libspdk_ut.a 00:02:26.387 LIB libspdk_log.a 00:02:26.387 LIB libspdk_ut_mock.a 00:02:26.647 CC lib/dma/dma.o 00:02:26.647 CC lib/util/bit_array.o 00:02:26.647 CC lib/util/base64.o 00:02:26.647 CC lib/ioat/ioat.o 00:02:26.647 CC 
lib/util/crc16.o 00:02:26.647 CXX lib/trace_parser/trace.o 00:02:26.647 CC lib/util/cpuset.o 00:02:26.647 CC lib/util/crc32.o 00:02:26.647 CC lib/util/crc32c.o 00:02:26.647 CC lib/util/crc32_ieee.o 00:02:26.647 CC lib/util/fd.o 00:02:26.647 CC lib/util/dif.o 00:02:26.647 CC lib/util/crc64.o 00:02:26.647 CC lib/util/fd_group.o 00:02:26.647 CC lib/util/hexlify.o 00:02:26.647 CC lib/util/file.o 00:02:26.647 CC lib/util/iov.o 00:02:26.647 CC lib/util/net.o 00:02:26.647 CC lib/util/math.o 00:02:26.647 CC lib/util/pipe.o 00:02:26.647 CC lib/util/strerror_tls.o 00:02:26.647 CC lib/util/string.o 00:02:26.647 CC lib/util/uuid.o 00:02:26.647 CC lib/util/xor.o 00:02:26.647 CC lib/util/md5.o 00:02:26.647 CC lib/util/zipf.o 00:02:26.647 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.647 CC lib/vfio_user/host/vfio_user.o 00:02:26.647 LIB libspdk_dma.a 00:02:26.647 LIB libspdk_ioat.a 00:02:26.907 LIB libspdk_vfio_user.a 00:02:26.907 LIB libspdk_util.a 00:02:27.165 LIB libspdk_trace_parser.a 00:02:27.165 CC lib/idxd/idxd.o 00:02:27.165 CC lib/idxd/idxd_user.o 00:02:27.165 CC lib/idxd/idxd_kernel.o 00:02:27.165 CC lib/rdma_utils/rdma_utils.o 00:02:27.165 CC lib/env_dpdk/env.o 00:02:27.165 CC lib/json/json_write.o 00:02:27.165 CC lib/env_dpdk/memory.o 00:02:27.165 CC lib/env_dpdk/pci.o 00:02:27.165 CC lib/json/json_parse.o 00:02:27.165 CC lib/rdma_provider/common.o 00:02:27.165 CC lib/json/json_util.o 00:02:27.165 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:27.165 CC lib/env_dpdk/init.o 00:02:27.165 CC lib/env_dpdk/threads.o 00:02:27.165 CC lib/vmd/vmd.o 00:02:27.165 CC lib/env_dpdk/pci_ioat.o 00:02:27.165 CC lib/env_dpdk/pci_virtio.o 00:02:27.165 CC lib/vmd/led.o 00:02:27.165 CC lib/conf/conf.o 00:02:27.165 CC lib/env_dpdk/pci_idxd.o 00:02:27.165 CC lib/env_dpdk/pci_vmd.o 00:02:27.165 CC lib/env_dpdk/pci_event.o 00:02:27.165 CC lib/env_dpdk/sigbus_handler.o 00:02:27.165 CC lib/env_dpdk/pci_dpdk.o 00:02:27.165 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:27.165 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:27.424 LIB libspdk_rdma_provider.a 00:02:27.424 LIB libspdk_conf.a 00:02:27.424 LIB libspdk_rdma_utils.a 00:02:27.424 LIB libspdk_json.a 00:02:27.424 LIB libspdk_idxd.a 00:02:27.424 LIB libspdk_vmd.a 00:02:27.684 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.684 CC lib/jsonrpc/jsonrpc_client.o 00:02:27.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.943 LIB libspdk_jsonrpc.a 00:02:28.202 LIB libspdk_env_dpdk.a 00:02:28.202 CC lib/rpc/rpc.o 00:02:28.202 LIB libspdk_rpc.a 00:02:28.461 CC lib/notify/notify_rpc.o 00:02:28.461 CC lib/notify/notify.o 00:02:28.719 CC lib/trace/trace_flags.o 00:02:28.719 CC lib/trace/trace.o 00:02:28.719 CC lib/trace/trace_rpc.o 00:02:28.719 CC lib/keyring/keyring.o 00:02:28.719 CC lib/keyring/keyring_rpc.o 00:02:28.719 LIB libspdk_notify.a 00:02:28.719 LIB libspdk_trace.a 00:02:28.719 LIB libspdk_keyring.a 00:02:28.978 CC lib/sock/sock.o 00:02:28.978 CC lib/sock/sock_rpc.o 00:02:28.978 CC lib/thread/thread.o 00:02:28.978 CC lib/thread/iobuf.o 00:02:29.236 LIB libspdk_sock.a 00:02:29.494 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.494 CC lib/nvme/nvme_ctrlr.o 00:02:29.494 CC lib/nvme/nvme_fabric.o 00:02:29.494 CC lib/nvme/nvme_ns_cmd.o 00:02:29.494 CC lib/nvme/nvme_ns.o 00:02:29.494 CC lib/nvme/nvme_pcie_common.o 00:02:29.494 CC lib/nvme/nvme_pcie.o 00:02:29.494 CC lib/nvme/nvme_quirks.o 00:02:29.494 CC lib/nvme/nvme_qpair.o 00:02:29.494 CC lib/nvme/nvme.o 00:02:29.752 CC lib/nvme/nvme_transport.o 00:02:29.752 CC lib/nvme/nvme_tcp.o 00:02:29.752 CC 
lib/nvme/nvme_discovery.o 00:02:29.752 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.752 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.752 CC lib/nvme/nvme_poll_group.o 00:02:29.752 CC lib/nvme/nvme_opal.o 00:02:29.752 CC lib/nvme/nvme_io_msg.o 00:02:29.752 CC lib/nvme/nvme_stubs.o 00:02:29.752 CC lib/nvme/nvme_zns.o 00:02:29.752 CC lib/nvme/nvme_auth.o 00:02:29.752 CC lib/nvme/nvme_cuse.o 00:02:29.752 CC lib/nvme/nvme_vfio_user.o 00:02:29.752 CC lib/nvme/nvme_rdma.o 00:02:30.009 LIB libspdk_thread.a 00:02:30.267 CC lib/blob/zeroes.o 00:02:30.267 CC lib/blob/blobstore.o 00:02:30.267 CC lib/blob/request.o 00:02:30.267 CC lib/fsdev/fsdev_io.o 00:02:30.267 CC lib/fsdev/fsdev.o 00:02:30.267 CC lib/blob/blob_bs_dev.o 00:02:30.267 CC lib/fsdev/fsdev_rpc.o 00:02:30.267 CC lib/virtio/virtio.o 00:02:30.267 CC lib/virtio/virtio_pci.o 00:02:30.267 CC lib/virtio/virtio_vhost_user.o 00:02:30.267 CC lib/virtio/virtio_vfio_user.o 00:02:30.267 CC lib/init/json_config.o 00:02:30.267 CC lib/init/subsystem.o 00:02:30.267 CC lib/init/subsystem_rpc.o 00:02:30.267 CC lib/init/rpc.o 00:02:30.267 CC lib/vfu_tgt/tgt_endpoint.o 00:02:30.267 CC lib/vfu_tgt/tgt_rpc.o 00:02:30.267 CC lib/accel/accel_sw.o 00:02:30.267 CC lib/accel/accel.o 00:02:30.267 CC lib/accel/accel_rpc.o 00:02:30.267 LIB libspdk_init.a 00:02:30.267 LIB libspdk_virtio.a 00:02:30.526 LIB libspdk_vfu_tgt.a 00:02:30.526 LIB libspdk_fsdev.a 00:02:30.526 CC lib/event/log_rpc.o 00:02:30.526 CC lib/event/app.o 00:02:30.526 CC lib/event/reactor.o 00:02:30.526 CC lib/event/app_rpc.o 00:02:30.526 CC lib/event/scheduler_static.o 00:02:30.785 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:30.785 LIB libspdk_event.a 00:02:31.043 LIB libspdk_accel.a 00:02:31.043 LIB libspdk_nvme.a 00:02:31.302 LIB libspdk_fuse_dispatcher.a 00:02:31.302 CC lib/bdev/bdev.o 00:02:31.302 CC lib/bdev/bdev_rpc.o 00:02:31.302 CC lib/bdev/bdev_zone.o 00:02:31.302 CC lib/bdev/part.o 00:02:31.302 CC lib/bdev/scsi_nvme.o 00:02:31.869 LIB libspdk_blob.a 00:02:32.128 CC lib/blobfs/blobfs.o 00:02:32.128 CC lib/blobfs/tree.o 00:02:32.128 CC lib/lvol/lvol.o 00:02:32.697 LIB libspdk_lvol.a 00:02:32.697 LIB libspdk_blobfs.a 00:02:32.955 LIB libspdk_bdev.a 00:02:33.216 CC lib/ublk/ublk.o 00:02:33.216 CC lib/nbd/nbd_rpc.o 00:02:33.216 CC lib/ublk/ublk_rpc.o 00:02:33.216 CC lib/nbd/nbd.o 00:02:33.216 CC lib/scsi/dev.o 00:02:33.216 CC lib/scsi/lun.o 00:02:33.216 CC lib/scsi/port.o 00:02:33.216 CC lib/scsi/scsi_bdev.o 00:02:33.216 CC lib/scsi/scsi_pr.o 00:02:33.216 CC lib/scsi/scsi.o 00:02:33.216 CC lib/scsi/scsi_rpc.o 00:02:33.216 CC lib/scsi/task.o 00:02:33.216 CC lib/ftl/ftl_core.o 00:02:33.216 CC lib/ftl/ftl_init.o 00:02:33.216 CC lib/ftl/ftl_io.o 00:02:33.216 CC lib/ftl/ftl_layout.o 00:02:33.216 CC lib/ftl/ftl_debug.o 00:02:33.216 CC lib/nvmf/ctrlr.o 00:02:33.216 CC lib/ftl/ftl_sb.o 00:02:33.216 CC lib/nvmf/ctrlr_discovery.o 00:02:33.216 CC lib/ftl/ftl_l2p.o 00:02:33.216 CC lib/nvmf/ctrlr_bdev.o 00:02:33.216 CC lib/ftl/ftl_l2p_flat.o 00:02:33.216 CC lib/nvmf/subsystem.o 00:02:33.216 CC lib/ftl/ftl_nv_cache.o 00:02:33.216 CC lib/ftl/ftl_band.o 00:02:33.216 CC lib/nvmf/nvmf.o 00:02:33.216 CC lib/ftl/ftl_band_ops.o 00:02:33.216 CC lib/ftl/ftl_reloc.o 00:02:33.216 CC lib/ftl/ftl_writer.o 00:02:33.216 CC lib/ftl/ftl_rq.o 00:02:33.216 CC lib/nvmf/nvmf_rpc.o 00:02:33.216 CC lib/ftl/ftl_l2p_cache.o 00:02:33.216 CC lib/nvmf/tcp.o 00:02:33.216 CC lib/nvmf/transport.o 00:02:33.216 CC lib/ftl/ftl_p2l.o 00:02:33.216 CC lib/nvmf/stubs.o 00:02:33.216 CC lib/ftl/ftl_p2l_log.o 00:02:33.216 CC lib/nvmf/mdns_server.o 
00:02:33.216 CC lib/nvmf/rdma.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.216 CC lib/nvmf/auth.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.216 CC lib/nvmf/vfio_user.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.216 CC lib/ftl/utils/ftl_conf.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.216 CC lib/ftl/utils/ftl_md.o 00:02:33.216 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.216 CC lib/ftl/utils/ftl_mempool.o 00:02:33.216 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.216 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.216 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.216 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.216 CC lib/ftl/utils/ftl_property.o 00:02:33.216 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.216 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.216 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.216 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:33.216 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.216 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.216 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.216 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:33.217 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:33.474 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:33.475 CC lib/ftl/base/ftl_base_dev.o 00:02:33.475 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.475 CC lib/ftl/ftl_trace.o 00:02:33.733 LIB libspdk_nbd.a 00:02:33.733 LIB libspdk_scsi.a 00:02:33.733 LIB libspdk_ublk.a 00:02:33.991 CC lib/vhost/vhost_rpc.o 00:02:33.991 CC lib/vhost/vhost.o 00:02:33.991 CC lib/vhost/vhost_blk.o 00:02:33.991 CC lib/vhost/vhost_scsi.o 00:02:33.991 CC lib/vhost/rte_vhost_user.o 00:02:33.991 CC lib/iscsi/init_grp.o 00:02:33.991 CC lib/iscsi/conn.o 00:02:33.991 CC lib/iscsi/param.o 00:02:33.991 CC lib/iscsi/iscsi.o 00:02:33.991 CC lib/iscsi/portal_grp.o 00:02:33.991 CC lib/iscsi/tgt_node.o 00:02:33.991 CC lib/iscsi/iscsi_subsystem.o 00:02:33.991 CC lib/iscsi/iscsi_rpc.o 00:02:33.991 CC lib/iscsi/task.o 00:02:33.991 LIB libspdk_ftl.a 00:02:34.556 LIB libspdk_nvmf.a 00:02:34.556 LIB libspdk_vhost.a 00:02:34.814 LIB libspdk_iscsi.a 00:02:35.380 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.380 CC module/vfu_device/vfu_virtio_blk.o 00:02:35.380 CC module/vfu_device/vfu_virtio.o 00:02:35.380 CC module/vfu_device/vfu_virtio_rpc.o 00:02:35.380 CC module/vfu_device/vfu_virtio_scsi.o 00:02:35.380 CC module/vfu_device/vfu_virtio_fs.o 00:02:35.380 CC module/accel/iaa/accel_iaa.o 00:02:35.380 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.380 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.380 CC module/blob/bdev/blob_bdev.o 00:02:35.380 LIB libspdk_env_dpdk_rpc.a 00:02:35.380 CC module/fsdev/aio/fsdev_aio.o 00:02:35.380 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:35.380 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.380 CC module/fsdev/aio/linux_aio_mgr.o 00:02:35.380 CC module/accel/dsa/accel_dsa.o 00:02:35.380 CC module/accel/error/accel_error_rpc.o 00:02:35.380 CC module/accel/ioat/accel_ioat.o 00:02:35.380 CC module/accel/error/accel_error.o 00:02:35.380 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.380 CC module/sock/posix/posix.o 00:02:35.380 CC module/keyring/linux/keyring_rpc.o 00:02:35.380 CC module/keyring/linux/keyring.o 00:02:35.380 CC 
module/keyring/file/keyring.o 00:02:35.380 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.380 CC module/keyring/file/keyring_rpc.o 00:02:35.380 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.380 LIB libspdk_scheduler_gscheduler.a 00:02:35.380 LIB libspdk_keyring_linux.a 00:02:35.380 LIB libspdk_keyring_file.a 00:02:35.380 LIB libspdk_accel_iaa.a 00:02:35.638 LIB libspdk_accel_error.a 00:02:35.638 LIB libspdk_accel_ioat.a 00:02:35.638 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.638 LIB libspdk_scheduler_dynamic.a 00:02:35.638 LIB libspdk_blob_bdev.a 00:02:35.638 LIB libspdk_accel_dsa.a 00:02:35.638 LIB libspdk_vfu_device.a 00:02:35.900 LIB libspdk_fsdev_aio.a 00:02:35.900 LIB libspdk_sock_posix.a 00:02:35.900 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.900 CC module/bdev/gpt/gpt.o 00:02:35.900 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.900 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.900 CC module/bdev/aio/bdev_aio_rpc.o 00:02:35.900 CC module/bdev/aio/bdev_aio.o 00:02:35.900 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.900 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.900 CC module/bdev/delay/vbdev_delay.o 00:02:35.900 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.900 CC module/bdev/error/vbdev_error.o 00:02:35.900 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.900 CC module/bdev/iscsi/bdev_iscsi.o 00:02:35.900 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:35.900 CC module/bdev/malloc/bdev_malloc.o 00:02:35.900 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.900 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.900 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:35.900 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:35.900 CC module/bdev/nvme/bdev_nvme.o 00:02:35.900 CC module/bdev/nvme/nvme_rpc.o 00:02:35.900 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:35.900 CC module/bdev/nvme/bdev_mdns_client.o 00:02:35.900 CC module/bdev/split/vbdev_split.o 00:02:35.900 CC module/bdev/nvme/vbdev_opal.o 00:02:35.900 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:35.900 CC module/bdev/split/vbdev_split_rpc.o 00:02:35.900 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:35.900 CC module/bdev/raid/bdev_raid.o 00:02:35.900 CC module/bdev/raid/raid0.o 00:02:35.900 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.900 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.900 CC module/bdev/passthru/vbdev_passthru.o 00:02:35.900 CC module/bdev/ftl/bdev_ftl.o 00:02:35.900 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.900 CC module/bdev/raid/concat.o 00:02:35.900 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:35.900 CC module/bdev/raid/raid1.o 00:02:35.900 CC module/bdev/null/bdev_null.o 00:02:35.900 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:35.900 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.900 CC module/bdev/null/bdev_null_rpc.o 00:02:36.159 LIB libspdk_blobfs_bdev.a 00:02:36.159 LIB libspdk_bdev_gpt.a 00:02:36.159 LIB libspdk_bdev_error.a 00:02:36.159 LIB libspdk_bdev_split.a 00:02:36.159 LIB libspdk_bdev_aio.a 00:02:36.159 LIB libspdk_bdev_delay.a 00:02:36.159 LIB libspdk_bdev_zone_block.a 00:02:36.159 LIB libspdk_bdev_malloc.a 00:02:36.159 LIB libspdk_bdev_iscsi.a 00:02:36.159 LIB libspdk_bdev_ftl.a 00:02:36.159 LIB libspdk_bdev_null.a 00:02:36.159 LIB libspdk_bdev_passthru.a 00:02:36.419 LIB libspdk_bdev_lvol.a 00:02:36.419 LIB libspdk_bdev_virtio.a 00:02:36.679 LIB libspdk_bdev_raid.a 00:02:37.248 LIB libspdk_bdev_nvme.a 00:02:37.817 CC module/event/subsystems/keyring/keyring.o 00:02:37.817 CC module/event/subsystems/vmd/vmd.o 00:02:37.817 CC module/event/subsystems/vmd/vmd_rpc.o 
00:02:37.817 CC module/event/subsystems/sock/sock.o 00:02:37.817 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.817 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.817 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.817 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.817 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.817 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.817 LIB libspdk_event_keyring.a 00:02:37.817 LIB libspdk_event_vmd.a 00:02:38.076 LIB libspdk_event_sock.a 00:02:38.076 LIB libspdk_event_vhost_blk.a 00:02:38.076 LIB libspdk_event_scheduler.a 00:02:38.076 LIB libspdk_event_iobuf.a 00:02:38.076 LIB libspdk_event_vfu_tgt.a 00:02:38.076 LIB libspdk_event_fsdev.a 00:02:38.335 CC module/event/subsystems/accel/accel.o 00:02:38.335 LIB libspdk_event_accel.a 00:02:38.594 CC module/event/subsystems/bdev/bdev.o 00:02:38.854 LIB libspdk_event_bdev.a 00:02:39.113 CC module/event/subsystems/scsi/scsi.o 00:02:39.113 CC module/event/subsystems/ublk/ublk.o 00:02:39.113 CC module/event/subsystems/nbd/nbd.o 00:02:39.113 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.113 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.113 LIB libspdk_event_ublk.a 00:02:39.113 LIB libspdk_event_scsi.a 00:02:39.113 LIB libspdk_event_nbd.a 00:02:39.373 LIB libspdk_event_nvmf.a 00:02:39.373 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.373 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.633 LIB libspdk_event_vhost_scsi.a 00:02:39.633 LIB libspdk_event_iscsi.a 00:02:39.900 CC app/spdk_nvme_perf/perf.o 00:02:39.900 CC test/rpc_client/rpc_client_test.o 00:02:39.900 CC app/spdk_top/spdk_top.o 00:02:39.900 CXX app/trace/trace.o 00:02:39.900 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.900 CC app/spdk_lspci/spdk_lspci.o 00:02:39.900 CC app/trace_record/trace_record.o 00:02:39.900 CC app/spdk_nvme_identify/identify.o 00:02:39.900 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.900 TEST_HEADER include/spdk/accel.h 00:02:39.900 TEST_HEADER include/spdk/assert.h 00:02:39.900 TEST_HEADER include/spdk/accel_module.h 00:02:39.900 TEST_HEADER include/spdk/barrier.h 00:02:39.900 TEST_HEADER include/spdk/bdev.h 00:02:39.900 TEST_HEADER include/spdk/base64.h 00:02:39.900 TEST_HEADER include/spdk/bdev_module.h 00:02:39.900 TEST_HEADER include/spdk/bit_array.h 00:02:39.900 TEST_HEADER include/spdk/bit_pool.h 00:02:39.900 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.900 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.900 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.900 TEST_HEADER include/spdk/blobfs.h 00:02:39.900 TEST_HEADER include/spdk/blob.h 00:02:39.900 TEST_HEADER include/spdk/conf.h 00:02:39.900 TEST_HEADER include/spdk/cpuset.h 00:02:39.900 TEST_HEADER include/spdk/config.h 00:02:39.900 TEST_HEADER include/spdk/crc16.h 00:02:39.900 TEST_HEADER include/spdk/crc32.h 00:02:39.900 TEST_HEADER include/spdk/dif.h 00:02:39.900 TEST_HEADER include/spdk/crc64.h 00:02:39.900 TEST_HEADER include/spdk/dma.h 00:02:39.900 TEST_HEADER include/spdk/endian.h 00:02:39.900 TEST_HEADER include/spdk/env.h 00:02:39.900 TEST_HEADER include/spdk/env_dpdk.h 00:02:39.900 TEST_HEADER include/spdk/fd_group.h 00:02:39.900 TEST_HEADER include/spdk/event.h 00:02:39.900 TEST_HEADER include/spdk/fd.h 00:02:39.900 TEST_HEADER include/spdk/file.h 00:02:39.900 TEST_HEADER include/spdk/fsdev.h 00:02:39.900 TEST_HEADER include/spdk/fsdev_module.h 00:02:39.900 TEST_HEADER include/spdk/ftl.h 00:02:39.900 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:39.900 TEST_HEADER 
include/spdk/gpt_spec.h 00:02:39.900 TEST_HEADER include/spdk/hexlify.h 00:02:39.900 TEST_HEADER include/spdk/histogram_data.h 00:02:39.900 TEST_HEADER include/spdk/idxd.h 00:02:39.900 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.900 TEST_HEADER include/spdk/init.h 00:02:39.900 CC app/nvmf_tgt/nvmf_main.o 00:02:39.900 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.900 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.900 TEST_HEADER include/spdk/ioat.h 00:02:39.900 TEST_HEADER include/spdk/json.h 00:02:39.900 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.900 TEST_HEADER include/spdk/keyring.h 00:02:39.900 TEST_HEADER include/spdk/keyring_module.h 00:02:39.900 TEST_HEADER include/spdk/log.h 00:02:39.900 TEST_HEADER include/spdk/lvol.h 00:02:39.900 TEST_HEADER include/spdk/likely.h 00:02:39.900 TEST_HEADER include/spdk/md5.h 00:02:39.900 TEST_HEADER include/spdk/memory.h 00:02:39.900 TEST_HEADER include/spdk/mmio.h 00:02:39.900 TEST_HEADER include/spdk/nbd.h 00:02:39.900 TEST_HEADER include/spdk/net.h 00:02:39.900 TEST_HEADER include/spdk/notify.h 00:02:39.900 TEST_HEADER include/spdk/nvme.h 00:02:39.900 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.900 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.900 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.900 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.900 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.900 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.900 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.900 TEST_HEADER include/spdk/nvmf.h 00:02:39.900 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.900 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.900 CC app/spdk_dd/spdk_dd.o 00:02:39.900 TEST_HEADER include/spdk/opal.h 00:02:39.900 TEST_HEADER include/spdk/opal_spec.h 00:02:39.901 TEST_HEADER include/spdk/pci_ids.h 00:02:39.901 TEST_HEADER include/spdk/pipe.h 00:02:39.901 TEST_HEADER include/spdk/queue.h 00:02:39.901 TEST_HEADER include/spdk/reduce.h 00:02:39.901 TEST_HEADER include/spdk/rpc.h 00:02:39.901 TEST_HEADER include/spdk/scheduler.h 00:02:39.901 TEST_HEADER include/spdk/scsi.h 00:02:39.901 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.901 TEST_HEADER include/spdk/sock.h 00:02:39.901 TEST_HEADER include/spdk/stdinc.h 00:02:39.901 TEST_HEADER include/spdk/string.h 00:02:39.901 TEST_HEADER include/spdk/thread.h 00:02:39.901 TEST_HEADER include/spdk/trace.h 00:02:39.901 CC examples/util/zipf/zipf.o 00:02:39.901 TEST_HEADER include/spdk/tree.h 00:02:39.901 TEST_HEADER include/spdk/trace_parser.h 00:02:39.901 TEST_HEADER include/spdk/ublk.h 00:02:39.901 TEST_HEADER include/spdk/util.h 00:02:39.901 TEST_HEADER include/spdk/version.h 00:02:39.901 TEST_HEADER include/spdk/uuid.h 00:02:39.901 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.901 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.901 TEST_HEADER include/spdk/vhost.h 00:02:39.901 TEST_HEADER include/spdk/vmd.h 00:02:39.901 TEST_HEADER include/spdk/xor.h 00:02:39.901 TEST_HEADER include/spdk/zipf.h 00:02:39.901 CXX test/cpp_headers/accel.o 00:02:39.901 CXX test/cpp_headers/accel_module.o 00:02:39.901 CXX test/cpp_headers/assert.o 00:02:39.901 CXX test/cpp_headers/barrier.o 00:02:39.901 CXX test/cpp_headers/base64.o 00:02:39.901 CXX test/cpp_headers/bdev.o 00:02:39.901 CC examples/ioat/verify/verify.o 00:02:39.901 CXX test/cpp_headers/bdev_zone.o 00:02:39.901 CXX test/cpp_headers/bdev_module.o 00:02:39.901 CXX test/cpp_headers/bit_pool.o 00:02:39.901 CXX test/cpp_headers/bit_array.o 00:02:39.901 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.901 CXX test/cpp_headers/blob_bdev.o 
00:02:39.901 CXX test/cpp_headers/blobfs.o 00:02:39.901 CXX test/cpp_headers/blob.o 00:02:39.901 CXX test/cpp_headers/conf.o 00:02:39.901 CC examples/ioat/perf/perf.o 00:02:39.901 CXX test/cpp_headers/config.o 00:02:39.901 CXX test/cpp_headers/cpuset.o 00:02:39.901 CXX test/cpp_headers/crc16.o 00:02:39.901 CXX test/cpp_headers/crc64.o 00:02:39.901 CXX test/cpp_headers/crc32.o 00:02:39.901 CXX test/cpp_headers/dif.o 00:02:39.901 CXX test/cpp_headers/endian.o 00:02:39.901 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.901 CXX test/cpp_headers/dma.o 00:02:39.901 CC app/fio/nvme/fio_plugin.o 00:02:39.901 CXX test/cpp_headers/env.o 00:02:39.901 CXX test/cpp_headers/env_dpdk.o 00:02:39.901 CXX test/cpp_headers/event.o 00:02:39.901 CXX test/cpp_headers/fd_group.o 00:02:39.901 CXX test/cpp_headers/fd.o 00:02:39.901 CXX test/cpp_headers/file.o 00:02:39.901 CXX test/cpp_headers/fsdev.o 00:02:39.901 CXX test/cpp_headers/fsdev_module.o 00:02:39.901 CXX test/cpp_headers/ftl.o 00:02:39.901 CXX test/cpp_headers/fuse_dispatcher.o 00:02:39.901 CXX test/cpp_headers/gpt_spec.o 00:02:39.901 CXX test/cpp_headers/hexlify.o 00:02:39.901 CXX test/cpp_headers/histogram_data.o 00:02:39.901 CXX test/cpp_headers/idxd.o 00:02:39.901 CXX test/cpp_headers/idxd_spec.o 00:02:39.901 CXX test/cpp_headers/init.o 00:02:39.901 CC test/thread/poller_perf/poller_perf.o 00:02:39.901 CXX test/cpp_headers/ioat.o 00:02:39.901 CXX test/cpp_headers/ioat_spec.o 00:02:39.901 CC test/thread/lock/spdk_lock.o 00:02:39.901 CC test/app/stub/stub.o 00:02:39.901 CC app/spdk_tgt/spdk_tgt.o 00:02:39.901 CC test/app/jsoncat/jsoncat.o 00:02:39.901 CC test/app/histogram_perf/histogram_perf.o 00:02:39.901 CC test/env/vtophys/vtophys.o 00:02:39.901 CC test/env/memory/memory_ut.o 00:02:39.901 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.901 CXX test/cpp_headers/iscsi_spec.o 00:02:39.901 CC test/env/pci/pci_ut.o 00:02:40.164 CC app/fio/bdev/fio_plugin.o 00:02:40.164 LINK spdk_lspci 00:02:40.164 CC test/app/bdev_svc/bdev_svc.o 00:02:40.164 CC test/dma/test_dma/test_dma.o 00:02:40.164 LINK rpc_client_test 00:02:40.164 LINK spdk_nvme_discover 00:02:40.164 CC test/env/mem_callbacks/mem_callbacks.o 00:02:40.164 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:40.164 LINK interrupt_tgt 00:02:40.164 LINK zipf 00:02:40.164 LINK spdk_trace_record 00:02:40.164 CXX test/cpp_headers/json.o 00:02:40.164 LINK jsoncat 00:02:40.164 LINK histogram_perf 00:02:40.164 LINK poller_perf 00:02:40.164 CXX test/cpp_headers/jsonrpc.o 00:02:40.164 CXX test/cpp_headers/keyring.o 00:02:40.164 CXX test/cpp_headers/keyring_module.o 00:02:40.164 CXX test/cpp_headers/likely.o 00:02:40.164 CXX test/cpp_headers/log.o 00:02:40.164 CXX test/cpp_headers/lvol.o 00:02:40.164 CXX test/cpp_headers/md5.o 00:02:40.164 LINK nvmf_tgt 00:02:40.164 LINK vtophys 00:02:40.164 CXX test/cpp_headers/memory.o 00:02:40.164 CXX test/cpp_headers/mmio.o 00:02:40.164 CXX test/cpp_headers/nbd.o 00:02:40.164 CXX test/cpp_headers/net.o 00:02:40.164 CXX test/cpp_headers/notify.o 00:02:40.164 CXX test/cpp_headers/nvme.o 00:02:40.164 CXX test/cpp_headers/nvme_intel.o 00:02:40.164 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.164 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.164 CXX test/cpp_headers/nvme_spec.o 00:02:40.164 CXX test/cpp_headers/nvme_zns.o 00:02:40.164 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.164 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.164 CXX test/cpp_headers/nvmf.o 00:02:40.164 CXX test/cpp_headers/nvmf_spec.o 00:02:40.164 CXX test/cpp_headers/nvmf_transport.o 00:02:40.164 CXX 
test/cpp_headers/opal.o 00:02:40.164 CXX test/cpp_headers/opal_spec.o 00:02:40.164 CXX test/cpp_headers/pci_ids.o 00:02:40.164 CXX test/cpp_headers/pipe.o 00:02:40.164 CXX test/cpp_headers/queue.o 00:02:40.164 CXX test/cpp_headers/reduce.o 00:02:40.164 CXX test/cpp_headers/rpc.o 00:02:40.164 CXX test/cpp_headers/scheduler.o 00:02:40.164 CXX test/cpp_headers/scsi.o 00:02:40.164 LINK verify 00:02:40.164 LINK env_dpdk_post_init 00:02:40.164 CXX test/cpp_headers/scsi_spec.o 00:02:40.164 LINK stub 00:02:40.164 CXX test/cpp_headers/sock.o 00:02:40.164 CXX test/cpp_headers/stdinc.o 00:02:40.164 CXX test/cpp_headers/string.o 00:02:40.164 LINK ioat_perf 00:02:40.164 CXX test/cpp_headers/thread.o 00:02:40.164 CXX test/cpp_headers/trace.o 00:02:40.164 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:40.164 LINK iscsi_tgt 00:02:40.424 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:40.424 LINK spdk_tgt 00:02:40.424 LINK bdev_svc 00:02:40.424 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:40.424 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:40.424 LINK spdk_trace 00:02:40.424 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:40.424 CXX test/cpp_headers/trace_parser.o 00:02:40.424 CXX test/cpp_headers/tree.o 00:02:40.424 CXX test/cpp_headers/ublk.o 00:02:40.424 CXX test/cpp_headers/util.o 00:02:40.424 CXX test/cpp_headers/uuid.o 00:02:40.424 CXX test/cpp_headers/version.o 00:02:40.424 CXX test/cpp_headers/vfio_user_pci.o 00:02:40.424 CXX test/cpp_headers/vfio_user_spec.o 00:02:40.424 CXX test/cpp_headers/vhost.o 00:02:40.424 CXX test/cpp_headers/vmd.o 00:02:40.424 CXX test/cpp_headers/xor.o 00:02:40.424 CXX test/cpp_headers/zipf.o 00:02:40.424 LINK spdk_dd 00:02:40.685 LINK nvme_fuzz 00:02:40.685 LINK pci_ut 00:02:40.685 LINK spdk_bdev 00:02:40.685 LINK spdk_nvme_identify 00:02:40.685 LINK llvm_vfio_fuzz 00:02:40.685 LINK test_dma 00:02:40.685 LINK spdk_nvme 00:02:40.685 LINK spdk_nvme_perf 00:02:40.944 LINK vhost_fuzz 00:02:40.944 CC examples/vmd/led/led.o 00:02:40.944 CC examples/sock/hello_world/hello_sock.o 00:02:40.944 CC examples/idxd/perf/perf.o 00:02:40.944 LINK mem_callbacks 00:02:40.944 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.944 LINK spdk_top 00:02:40.944 LINK llvm_nvme_fuzz 00:02:40.944 CC examples/thread/thread/thread_ex.o 00:02:40.944 CC app/vhost/vhost.o 00:02:40.944 LINK lsvmd 00:02:40.944 LINK led 00:02:41.207 LINK hello_sock 00:02:41.207 LINK idxd_perf 00:02:41.207 LINK thread 00:02:41.207 LINK memory_ut 00:02:41.207 LINK vhost 00:02:41.207 LINK spdk_lock 00:02:41.601 LINK iscsi_fuzz 00:02:41.933 CC examples/nvme/hello_world/hello_world.o 00:02:41.933 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.933 CC examples/nvme/reconnect/reconnect.o 00:02:41.933 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.933 CC examples/nvme/arbitration/arbitration.o 00:02:41.933 CC examples/nvme/abort/abort.o 00:02:41.933 CC examples/nvme/hotplug/hotplug.o 00:02:41.933 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.933 CC test/event/reactor_perf/reactor_perf.o 00:02:41.933 CC test/event/event_perf/event_perf.o 00:02:41.933 CC test/event/reactor/reactor.o 00:02:41.933 CC test/event/app_repeat/app_repeat.o 00:02:41.933 CC test/event/scheduler/scheduler.o 00:02:41.933 LINK pmr_persistence 00:02:41.933 LINK reactor_perf 00:02:41.933 LINK hello_world 00:02:41.933 LINK cmb_copy 00:02:41.933 LINK event_perf 00:02:41.933 LINK reactor 00:02:41.933 LINK hotplug 00:02:41.933 LINK app_repeat 00:02:41.933 LINK reconnect 00:02:42.191 LINK abort 00:02:42.191 LINK arbitration 00:02:42.191 
LINK scheduler 00:02:42.191 LINK nvme_manage 00:02:42.449 CC test/nvme/aer/aer.o 00:02:42.449 CC test/nvme/compliance/nvme_compliance.o 00:02:42.449 CC test/nvme/reserve/reserve.o 00:02:42.449 CC test/nvme/reset/reset.o 00:02:42.449 CC test/nvme/simple_copy/simple_copy.o 00:02:42.449 CC test/nvme/connect_stress/connect_stress.o 00:02:42.449 CC test/nvme/e2edp/nvme_dp.o 00:02:42.449 CC test/nvme/overhead/overhead.o 00:02:42.449 CC test/nvme/startup/startup.o 00:02:42.449 CC test/nvme/boot_partition/boot_partition.o 00:02:42.449 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.449 CC test/nvme/err_injection/err_injection.o 00:02:42.449 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.449 CC test/nvme/cuse/cuse.o 00:02:42.449 CC test/nvme/sgl/sgl.o 00:02:42.449 CC test/nvme/fdp/fdp.o 00:02:42.449 CC test/blobfs/mkfs/mkfs.o 00:02:42.449 CC test/accel/dif/dif.o 00:02:42.449 CC test/lvol/esnap/esnap.o 00:02:42.449 LINK boot_partition 00:02:42.449 LINK startup 00:02:42.449 LINK connect_stress 00:02:42.449 LINK reserve 00:02:42.449 LINK doorbell_aers 00:02:42.449 LINK err_injection 00:02:42.449 LINK fused_ordering 00:02:42.449 LINK simple_copy 00:02:42.449 LINK aer 00:02:42.449 LINK reset 00:02:42.449 LINK nvme_dp 00:02:42.449 LINK mkfs 00:02:42.449 LINK overhead 00:02:42.708 LINK sgl 00:02:42.708 LINK fdp 00:02:42.708 LINK nvme_compliance 00:02:42.708 CC examples/accel/perf/accel_perf.o 00:02:42.708 CC examples/blob/cli/blobcli.o 00:02:42.708 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:42.708 CC examples/blob/hello_world/hello_blob.o 00:02:42.968 LINK dif 00:02:42.968 LINK hello_fsdev 00:02:42.968 LINK hello_blob 00:02:42.968 LINK accel_perf 00:02:43.226 LINK blobcli 00:02:43.226 LINK cuse 00:02:43.794 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.794 CC examples/bdev/bdevperf/bdevperf.o 00:02:44.053 LINK hello_bdev 00:02:44.313 LINK bdevperf 00:02:44.571 CC test/bdev/bdevio/bdevio.o 00:02:44.829 LINK bdevio 00:02:45.764 CC examples/nvmf/nvmf/nvmf.o 00:02:46.023 LINK esnap 00:02:46.023 LINK nvmf 00:02:47.401 00:02:47.401 real 0m46.106s 00:02:47.401 user 6m52.633s 00:02:47.401 sys 2m17.908s 00:02:47.401 09:21:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:47.401 09:21:42 make -- common/autotest_common.sh@10 -- $ set +x 00:02:47.401 ************************************ 00:02:47.401 END TEST make 00:02:47.401 ************************************ 00:02:47.401 09:21:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:47.401 09:21:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:47.401 09:21:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:47.401 09:21:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.401 09:21:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:47.401 09:21:42 -- pm/common@44 -- $ pid=365384 00:02:47.401 09:21:42 -- pm/common@50 -- $ kill -TERM 365384 00:02:47.401 09:21:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.401 09:21:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:47.401 09:21:42 -- pm/common@44 -- $ pid=365386 00:02:47.401 09:21:42 -- pm/common@50 -- $ kill -TERM 365386 00:02:47.401 09:21:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.401 09:21:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
00:02:47.401 09:21:42 -- pm/common@44 -- $ pid=365388 00:02:47.401 09:21:42 -- pm/common@50 -- $ kill -TERM 365388 00:02:47.401 09:21:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.401 09:21:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:47.401 09:21:42 -- pm/common@44 -- $ pid=365411 00:02:47.401 09:21:42 -- pm/common@50 -- $ sudo -E kill -TERM 365411 00:02:47.401 09:21:42 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:47.401 09:21:42 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:47.401 09:21:42 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:47.661 09:21:43 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:47.661 09:21:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:47.661 09:21:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:47.661 09:21:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:47.661 09:21:43 -- scripts/common.sh@336 -- # IFS=.-: 00:02:47.661 09:21:43 -- scripts/common.sh@336 -- # read -ra ver1 00:02:47.661 09:21:43 -- scripts/common.sh@337 -- # IFS=.-: 00:02:47.661 09:21:43 -- scripts/common.sh@337 -- # read -ra ver2 00:02:47.661 09:21:43 -- scripts/common.sh@338 -- # local 'op=<' 00:02:47.661 09:21:43 -- scripts/common.sh@340 -- # ver1_l=2 00:02:47.661 09:21:43 -- scripts/common.sh@341 -- # ver2_l=1 00:02:47.661 09:21:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:47.661 09:21:43 -- scripts/common.sh@344 -- # case "$op" in 00:02:47.661 09:21:43 -- scripts/common.sh@345 -- # : 1 00:02:47.661 09:21:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:47.661 09:21:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:47.661 09:21:43 -- scripts/common.sh@365 -- # decimal 1 00:02:47.661 09:21:43 -- scripts/common.sh@353 -- # local d=1 00:02:47.661 09:21:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:47.661 09:21:43 -- scripts/common.sh@355 -- # echo 1 00:02:47.661 09:21:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:47.661 09:21:43 -- scripts/common.sh@366 -- # decimal 2 00:02:47.661 09:21:43 -- scripts/common.sh@353 -- # local d=2 00:02:47.661 09:21:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:47.661 09:21:43 -- scripts/common.sh@355 -- # echo 2 00:02:47.661 09:21:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:47.661 09:21:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:47.661 09:21:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:47.661 09:21:43 -- scripts/common.sh@368 -- # return 0 00:02:47.661 09:21:43 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:47.661 09:21:43 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.661 --rc genhtml_branch_coverage=1 00:02:47.661 --rc genhtml_function_coverage=1 00:02:47.661 --rc genhtml_legend=1 00:02:47.661 --rc geninfo_all_blocks=1 00:02:47.661 --rc geninfo_unexecuted_blocks=1 00:02:47.661 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.661 ' 00:02:47.661 09:21:43 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.661 --rc genhtml_branch_coverage=1 00:02:47.661 --rc genhtml_function_coverage=1 00:02:47.661 --rc genhtml_legend=1 00:02:47.661 --rc geninfo_all_blocks=1 
00:02:47.661 --rc geninfo_unexecuted_blocks=1 00:02:47.661 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.661 ' 00:02:47.661 09:21:43 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.661 --rc genhtml_branch_coverage=1 00:02:47.661 --rc genhtml_function_coverage=1 00:02:47.661 --rc genhtml_legend=1 00:02:47.661 --rc geninfo_all_blocks=1 00:02:47.661 --rc geninfo_unexecuted_blocks=1 00:02:47.661 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.661 ' 00:02:47.661 09:21:43 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.661 --rc genhtml_branch_coverage=1 00:02:47.661 --rc genhtml_function_coverage=1 00:02:47.661 --rc genhtml_legend=1 00:02:47.661 --rc geninfo_all_blocks=1 00:02:47.661 --rc geninfo_unexecuted_blocks=1 00:02:47.661 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:47.661 ' 00:02:47.661 09:21:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.661 09:21:43 -- nvmf/common.sh@7 -- # uname -s 00:02:47.661 09:21:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.661 09:21:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.661 09:21:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.661 09:21:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.661 09:21:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.661 09:21:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.661 09:21:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.661 09:21:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.661 09:21:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.661 09:21:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.661 09:21:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:02:47.661 09:21:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:02:47.661 09:21:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.661 09:21:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.661 09:21:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:47.661 09:21:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.661 09:21:43 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:47.661 09:21:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.661 09:21:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.661 09:21:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.661 09:21:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.661 09:21:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.661 09:21:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.661 09:21:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.661 09:21:43 -- paths/export.sh@5 -- # export PATH 00:02:47.661 09:21:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.661 09:21:43 -- nvmf/common.sh@51 -- # : 0 00:02:47.661 09:21:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.661 09:21:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:47.661 09:21:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.661 09:21:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.662 09:21:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.662 09:21:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.662 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.662 09:21:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.662 09:21:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.662 09:21:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.662 09:21:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.662 09:21:43 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.662 09:21:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.662 09:21:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.662 09:21:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:47.662 09:21:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.662 09:21:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:47.662 09:21:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.662 09:21:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.662 09:21:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.662 09:21:43 -- spdk/autotest.sh@48 -- # udevadm_pid=424677 00:02:47.662 09:21:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.662 09:21:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.662 09:21:43 -- pm/common@17 -- # local monitor 00:02:47.662 09:21:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.662 09:21:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.662 09:21:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.662 09:21:43 -- pm/common@21 -- # date +%s 00:02:47.662 09:21:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.662 09:21:43 -- pm/common@21 -- # date +%s 00:02:47.662 09:21:43 -- pm/common@21 -- # date +%s 00:02:47.662 09:21:43 -- pm/common@25 -- # sleep 1 
00:02:47.662 09:21:43 -- pm/common@21 -- # date +%s 00:02:47.662 09:21:43 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285703 00:02:47.662 09:21:43 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285703 00:02:47.662 09:21:43 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285703 00:02:47.662 09:21:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728285703 00:02:47.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285703_collect-cpu-load.pm.log 00:02:47.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285703_collect-cpu-temp.pm.log 00:02:47.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285703_collect-vmstat.pm.log 00:02:47.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728285703_collect-bmc-pm.bmc.pm.log 00:02:48.620 09:21:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.620 09:21:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.620 09:21:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:48.620 09:21:44 -- common/autotest_common.sh@10 -- # set +x 00:02:48.620 09:21:44 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.620 09:21:44 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:48.620 09:21:44 -- common/autotest_common.sh@10 -- # set +x 00:02:48.620 09:21:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:48.620 09:21:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.620 09:21:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.620 09:21:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:48.620 09:21:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:48.620 09:21:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.620 09:21:44 -- common/autotest_common.sh@1455 -- # uname 00:02:48.620 09:21:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:48.620 09:21:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.620 09:21:44 -- common/autotest_common.sh@1475 -- # uname 00:02:48.620 09:21:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:48.620 09:21:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.620 09:21:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:48.879 lcov: LCOV version 1.15 00:02:48.879 09:21:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:57.008 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:04.496 09:21:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:04.496 09:21:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.496 09:21:59 -- common/autotest_common.sh@10 -- # set +x 00:03:04.496 09:21:59 -- spdk/autotest.sh@78 -- # rm -f 00:03:04.496 09:21:59 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.811 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:03:07.811 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:07.811 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:07.811 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:07.811 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:07.811 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:07.811 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.069 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.069 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.069 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.069 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.069 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.069 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.328 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.868 09:22:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:10.868 09:22:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:10.868 09:22:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:10.868 09:22:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:10.868 09:22:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:10.868 09:22:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:10.868 09:22:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:10.868 09:22:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.868 09:22:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:10.868 09:22:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:10.868 09:22:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:10.868 09:22:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:10.868 09:22:05 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:03:10.868 09:22:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:10.868 09:22:05 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.868 No valid GPT data, bailing 00:03:10.868 09:22:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.868 09:22:05 -- scripts/common.sh@394 -- # pt= 00:03:10.868 09:22:05 -- scripts/common.sh@395 -- # return 1 00:03:10.868 09:22:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.868 1+0 records in 00:03:10.868 1+0 records out 00:03:10.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475857 s, 220 MB/s 00:03:10.868 09:22:05 -- spdk/autotest.sh@105 -- # sync 00:03:10.868 09:22:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.868 09:22:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.868 09:22:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.149 09:22:11 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.149 09:22:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.149 09:22:11 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:16.149 09:22:11 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.149 09:22:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.149 09:22:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.149 09:22:11 -- common/autotest_common.sh@10 -- # set +x 00:03:16.149 ************************************ 00:03:16.149 START TEST setup.sh 00:03:16.149 ************************************ 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.150 * Looking for test storage... 00:03:16.150 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1681 -- # lcov --version 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.150 09:22:11 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:16.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.150 --rc genhtml_branch_coverage=1 00:03:16.150 --rc genhtml_function_coverage=1 00:03:16.150 --rc genhtml_legend=1 00:03:16.150 --rc geninfo_all_blocks=1 00:03:16.150 --rc geninfo_unexecuted_blocks=1 00:03:16.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.150 ' 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:16.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.150 --rc genhtml_branch_coverage=1 00:03:16.150 --rc genhtml_function_coverage=1 00:03:16.150 --rc genhtml_legend=1 00:03:16.150 --rc geninfo_all_blocks=1 00:03:16.150 --rc geninfo_unexecuted_blocks=1 00:03:16.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.150 ' 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:16.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.150 --rc genhtml_branch_coverage=1 00:03:16.150 --rc genhtml_function_coverage=1 00:03:16.150 --rc genhtml_legend=1 00:03:16.150 --rc geninfo_all_blocks=1 00:03:16.150 --rc geninfo_unexecuted_blocks=1 00:03:16.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.150 ' 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:16.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.150 --rc genhtml_branch_coverage=1 00:03:16.150 --rc genhtml_function_coverage=1 00:03:16.150 --rc genhtml_legend=1 00:03:16.150 --rc geninfo_all_blocks=1 00:03:16.150 --rc geninfo_unexecuted_blocks=1 00:03:16.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.150 ' 00:03:16.150 09:22:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:16.150 09:22:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.150 09:22:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.150 09:22:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.150 
09:22:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.150 ************************************ 00:03:16.150 START TEST acl 00:03:16.150 ************************************ 00:03:16.150 09:22:11 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:16.150 * Looking for test storage... 00:03:16.150 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:16.150 09:22:11 setup.sh.acl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:16.150 09:22:11 setup.sh.acl -- common/autotest_common.sh@1681 -- # lcov --version 00:03:16.150 09:22:11 setup.sh.acl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.410 09:22:11 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:16.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.410 --rc genhtml_branch_coverage=1 00:03:16.410 --rc genhtml_function_coverage=1 00:03:16.410 --rc genhtml_legend=1 00:03:16.410 --rc geninfo_all_blocks=1 00:03:16.410 --rc geninfo_unexecuted_blocks=1 00:03:16.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.410 ' 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:16.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.410 --rc genhtml_branch_coverage=1 00:03:16.410 --rc genhtml_function_coverage=1 00:03:16.410 --rc genhtml_legend=1 00:03:16.410 --rc geninfo_all_blocks=1 00:03:16.410 --rc geninfo_unexecuted_blocks=1 00:03:16.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.410 ' 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:16.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.410 --rc genhtml_branch_coverage=1 00:03:16.410 --rc genhtml_function_coverage=1 00:03:16.410 --rc genhtml_legend=1 00:03:16.410 --rc geninfo_all_blocks=1 00:03:16.410 --rc geninfo_unexecuted_blocks=1 00:03:16.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.410 ' 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:16.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.410 --rc genhtml_branch_coverage=1 00:03:16.410 --rc genhtml_function_coverage=1 00:03:16.410 --rc genhtml_legend=1 00:03:16.410 --rc geninfo_all_blocks=1 00:03:16.410 --rc geninfo_unexecuted_blocks=1 00:03:16.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:16.410 ' 00:03:16.410 09:22:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:16.410 09:22:11 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.410 09:22:11 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:16.410 09:22:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:16.410 09:22:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:16.410 09:22:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:16.410 09:22:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.410 09:22:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:16.410 09:22:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.410 09:22:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.974 09:22:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:22.974 09:22:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:22.974 09:22:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.974 09:22:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:22.974 09:22:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.974 09:22:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:25.510 Hugepages 00:03:25.510 node hugesize free / total 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 00:03:25.510 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.510 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:25.769 09:22:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:25.769 09:22:21 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.769 09:22:21 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.769 09:22:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.769 ************************************ 00:03:25.769 START TEST denied 00:03:25.769 ************************************ 00:03:25.769 09:22:21 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:25.769 09:22:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:03:25.769 09:22:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:25.769 09:22:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:03:25.769 09:22:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.769 09:22:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:32.325 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:32.325 09:22:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.326 09:22:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.892 
00:03:38.892 real 0m12.347s 00:03:38.892 user 0m3.653s 00:03:38.892 sys 0m7.731s 00:03:38.892 09:22:33 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.892 09:22:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:38.892 ************************************ 00:03:38.892 END TEST denied 00:03:38.892 ************************************ 00:03:38.892 09:22:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:38.892 09:22:33 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.892 09:22:33 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.892 09:22:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:38.892 ************************************ 00:03:38.892 START TEST allowed 00:03:38.892 ************************************ 00:03:38.892 09:22:33 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:38.892 09:22:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:03:38.892 09:22:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:38.892 09:22:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:03:38.892 09:22:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.892 09:22:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:47.004 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.004 09:22:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:47.004 09:22:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:47.004 09:22:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:47.004 09:22:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.004 09:22:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.688 00:03:53.688 real 0m14.377s 00:03:53.688 user 0m3.591s 00:03:53.688 sys 0m7.474s 00:03:53.688 09:22:47 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.688 09:22:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:53.688 ************************************ 00:03:53.688 END TEST allowed 00:03:53.688 ************************************ 00:03:53.688 00:03:53.688 real 0m36.410s 00:03:53.688 user 0m10.454s 00:03:53.688 sys 0m21.802s 00:03:53.688 09:22:48 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.688 09:22:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.688 ************************************ 00:03:53.688 END TEST acl 00:03:53.688 ************************************ 00:03:53.688 09:22:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:53.688 09:22:48 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.688 09:22:48 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.688 09:22:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.688 ************************************ 00:03:53.688 START TEST hugepages 00:03:53.688 ************************************ 00:03:53.688 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:53.688 * Looking for test storage... 
00:03:53.688 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:53.688 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:53.688 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lcov --version 00:03:53.688 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:53.688 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.688 09:22:48 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.689 09:22:48 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:53.689 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.689 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:53.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.689 --rc genhtml_branch_coverage=1 00:03:53.689 --rc genhtml_function_coverage=1 00:03:53.689 --rc genhtml_legend=1 00:03:53.689 --rc geninfo_all_blocks=1 00:03:53.689 --rc geninfo_unexecuted_blocks=1 00:03:53.689 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.689 ' 00:03:53.689 09:22:48 
setup.sh.hugepages -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:53.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.689 --rc genhtml_branch_coverage=1 00:03:53.689 --rc genhtml_function_coverage=1 00:03:53.689 --rc genhtml_legend=1 00:03:53.689 --rc geninfo_all_blocks=1 00:03:53.689 --rc geninfo_unexecuted_blocks=1 00:03:53.689 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.689 ' 00:03:53.689 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:53.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.689 --rc genhtml_branch_coverage=1 00:03:53.689 --rc genhtml_function_coverage=1 00:03:53.689 --rc genhtml_legend=1 00:03:53.689 --rc geninfo_all_blocks=1 00:03:53.689 --rc geninfo_unexecuted_blocks=1 00:03:53.689 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.689 ' 00:03:53.689 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:53.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.689 --rc genhtml_branch_coverage=1 00:03:53.689 --rc genhtml_function_coverage=1 00:03:53.689 --rc genhtml_legend=1 00:03:53.689 --rc geninfo_all_blocks=1 00:03:53.689 --rc geninfo_unexecuted_blocks=1 00:03:53.689 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.689 ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 72700300 kB' 'MemAvailable: 76481664 kB' 'Buffers: 9772 kB' 'Cached: 12014556 kB' 'SwapCached: 0 kB' 'Active: 8755204 kB' 'Inactive: 3781292 kB' 'Active(anon): 8332992 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515496 kB' 'Mapped: 193424 kB' 
'Shmem: 7820824 kB' 'KReclaimable: 413780 kB' 'Slab: 1001452 kB' 'SReclaimable: 413780 kB' 'SUnreclaim: 587672 kB' 'KernelStack: 17632 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434172 kB' 'Committed_AS: 9716104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212180 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.689 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:03:53.690 09:22:48 setup.sh.hugepages -- setup/common.sh@31-32 -- # (xtrace condensed: the loop scans the remaining /proc/meminfo keys -- Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp -- none matches Hugepagesize, so each iteration hits continue)
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
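The trace above is the tail of the get_meminfo helper in setup/common.sh walking /proc/meminfo one IFS=': ' record at a time until the requested key (Hugepagesize) matches. A minimal standalone sketch of the same pattern -- get_meminfo_sketch is a hypothetical name, and the real helper additionally handles per-node meminfo files:

    #!/usr/bin/env bash
    # Sketch of the scan traced above: split each /proc/meminfo line on ': ',
    # skip non-matching keys (the long run of "continue" events), and print
    # the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # e.g. Dirty, Writeback, ... are skipped
            echo "$val"                       # Hugepagesize -> "2048" (the "kB" unit lands in $_)
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize   # prints 2048 on this runner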
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@26-31 -- # (xtrace condensed: for each directory under /sys/devices/system/node, nodes_sys[0]=1024 and nodes_sys[1]=1024 are recorded; no_nodes=2)
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@38-40 -- # (xtrace condensed: for every pool matching /sys/devices/system/node/node{0,1}/hugepages/hugepages-*, echo 0 -- both pools on both nodes are zeroed)
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:53.691 09:22:48 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:53.691 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:53.691 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:53.691 09:22:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:53.691 ************************************
00:03:53.691 START TEST single_node_setup
00:03:53.691 ************************************
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48-70 -- # (xtrace condensed: size=2097152, node_ids=('0'), nr_hugepages=1024, user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, nodes_test[0]=1024)
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.691 09:22:48 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:56.222 (condensed: 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0, all '(8086 2021): ioatdma -> vfio-pci')
00:03:59.509 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
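With NRHUGE=1024 and HUGENODE=0 exported, scripts/setup.sh reserves the test's pages on NUMA node 0 only; its internals are not traced in this run. A rough equivalent of that reservation step via the standard sysfs interface, under the assumption that setup.sh uses the per-node nr_hugepages knob (requires root):

    # Assumed equivalent of the NRHUGE=1024 HUGENODE=0 reservation: write the
    # page count into node 0's 2048 kB pool (the default_hugepages size found above).
    NRHUGE=1024 HUGENODE=0
    echo "$NRHUGE" > "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"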
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88-93 -- # (xtrace condensed: local node sorted_t sorted_s surp resv anon)
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@17-28 -- # (xtrace condensed: get=AnonHugePages, node= empty, mem_f=/proc/meminfo, mapfile -t mem)
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.417 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74876620 kB' 'MemAvailable: 78657864 kB' 'Buffers: 9772 kB' 'Cached: 12014740 kB' 'SwapCached: 0 kB' 'Active: 8756224 kB' 'Inactive: 3781292 kB' 'Active(anon): 8334012 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516348 kB' 'Mapped: 193428 kB' 'Shmem: 7821008 kB' 'KReclaimable: 413660 kB' 'Slab: 999988 kB' 'SReclaimable: 413660 kB' 'SUnreclaim: 586328 kB' 'KernelStack: 17456 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9715724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212116 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
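The hugepages.sh@95 test above gates anonymous-hugepage accounting on transparent hugepage state: the string "always [madvise] never" is this runner's THP mode, and since "[never]" is not the selected entry the helper goes on to read AnonHugePages. A sketch of that gate, assuming the mode string comes from the usual sysfs knob:

    # Read the THP mode (the bracketed entry is active, "[madvise]" here) and
    # only count AnonHugePages when THP is not globally disabled.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # helper sketched earlier; 0 kB on this run
    fi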
00:04:01.418 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # (xtrace condensed: the loop walks MemTotal through HardwareCorrupted; no key matches AnonHugePages, so every iteration hits continue)
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@17-28 -- # (xtrace condensed: get=HugePages_Surp, node= empty, mem_f=/proc/meminfo, mapfile -t mem)
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.419 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' … (second /proc/meminfo snapshot; identical to the one above except MemFree: 74877696 kB, MemAvailable: 78658932 kB, Active: 8755708 kB, Active(anon): 8333496 kB, AnonPages: 515848 kB, Mapped: 193364 kB, KReclaimable/SReclaimable: 413652 kB, Slab: 999980 kB, KernelStack: 17440 kB, PageTables: 8360 kB, Committed_AS: 9715744 kB)
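Both get_meminfo calls so far run with node= empty, so they read /proc/meminfo directly; the prefix strip at common.sh@29 only matters for the per-node files, whose lines begin with "Node N ". A sketch of that switch as it appears in the traced preamble (extglob is required for the +([0-9]) pattern):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the node switch in the get_meminfo preamble traced above:
    # an empty $node falls through to /proc/meminfo; a node id selects the
    # per-node file, whose "Node N " line prefix is then stripped.
    node=${1-} mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # no-op for plain /proc/meminfo lines
    printf '%s\n' "${mem[@]}"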
00:04:01.420 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # (xtrace condensed: the loop walks MemTotal through HugePages_Rsvd; no key matches HugePages_Surp, so every iteration hits continue)
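verify_nr_hugepages pulls AnonHugePages, HugePages_Surp, and (next) HugePages_Rsvd with one full meminfo walk each, which is why the same skip pattern repeats three times in this log. For comparison, a sketch that collects all hugepage counters in a single pass -- the field names are the standard /proc/meminfo ones, and this is an alternative, not the script's actual approach:

    #!/usr/bin/env bash
    # Gather every hugepage-related counter from /proc/meminfo in one scan
    # instead of one get_meminfo call (and one full walk) per field.
    declare -A hp
    while IFS=': ' read -r key val _; do
        [[ $key == HugePages_* || $key == Hugepagesize || $key == AnonHugePages ]] && hp[$key]=$val
    done < /proc/meminfo
    echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
         "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]} anon=${hp[AnonHugePages]}kB"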
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@17-28 -- # (xtrace condensed: get=HugePages_Rsvd, node= empty, mem_f=/proc/meminfo, mapfile -t mem)
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' … (third /proc/meminfo snapshot; identical to the second except MemFree: 74877460 kB, MemAvailable: 78658696 kB, Active: 8755872 kB, Active(anon): 8333660 kB, AnonPages: 516004 kB, Committed_AS: 9715764 kB)
00:04:01.421 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # (xtrace condensed: the scan for HugePages_Rsvd walks MemTotal through SwapTotal, each key hitting continue)
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.422 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:01.423 nr_hugepages=1024 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:01.423 resv_hugepages=0 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:01.423 surplus_hugepages=0 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:01.423 anon_hugepages=0 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.423 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.424 09:22:56 setup.sh.hugepages.single_node_setup -- 
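The long runs of IFS=': ' / read -r var val _ / continue records above are the get_meminfo helper scanning a meminfo snapshot one key at a time until it reaches the requested field (HugePages_Surp, then HugePages_Rsvd), echoing that field's value and returning. A minimal sketch of the helper, reconstructed from the traced commands alone; the real setup/common.sh may differ in details such as how the node argument is validated:

    #!/usr/bin/env bash
    shopt -s extglob    # required for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's meminfo instead; with an
        # empty node the .../node/meminfo path does not exist, so the
        # global /proc/meminfo is kept (matching the trace above).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as surp=$(get_meminfo HugePages_Surp) for the global pool, or get_meminfo HugePages_Surp 0 for node 0, matching the invocations traced in this run.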
00:04:01.424 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74877580 kB' 'MemAvailable: 78658816 kB' 'Buffers: 9772 kB' 'Cached: 12014780 kB' 'SwapCached: 0 kB' 'Active: 8756340 kB' 'Inactive: 3781292 kB' 'Active(anon): 8334128 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516448 kB' 'Mapped: 193364 kB' 'Shmem: 7821048 kB' 'KReclaimable: 413652 kB' 'Slab: 999980 kB' 'SReclaimable: 413652 kB' 'SUnreclaim: 586328 kB' 'KernelStack: 17488 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9715788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212116 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... scan records elided: MemTotal through Unaccepted each fail the HugePages_Total match at setup/common.sh@32 and continue via setup/common.sh@31 IFS=': ' / read -r var val _ ...]
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
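get_nodes, traced just above, walks /sys/devices/system/node/node+([0-9]) and records how many hugepages the kernel placed on each NUMA node (1024 on node 0, 0 on node 1 in this run). The trace only shows the already-expanded values, so the sketch below assumes they come from the kernel's standard per-node nr_hugepages counters; the hugepages-2048kB directory matches the Hugepagesize reported in the snapshots above, and the actual helper may read the counts differently:

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node0" to the bare index "0".
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[n]}"
    done

On this host that would print node0=1024 and node1=0, which is the distribution the single-node test expects.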
00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.425 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34527500 kB' 'MemUsed: 13537364 kB' 'SwapCached: 0 kB' 'Active: 6240952 kB' 'Inactive: 3659820 kB' 'Active(anon): 6030720 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3659820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9752864 kB' 'Mapped: 116680 kB' 'AnonPages: 151072 kB' 'Shmem: 5882812 kB' 'KernelStack: 10312 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135860 kB' 'Slab: 418248 kB' 'SReclaimable: 135860 kB' 'SUnreclaim: 282388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.426 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed (00:04:01.426-00:04:01.427, setup/common.sh@31-32): the IFS=': ' read loop walks the remaining /proc/meminfo keys (FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), hitting the continue branch for each until the key is HugePages_Surp]
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:01.427
00:04:01.427 real 0m8.506s
00:04:01.427 user 0m1.711s
00:04:01.427 sys 0m3.646s
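The scan traced above is setup/common.sh's get_meminfo helper reading /proc/meminfo one "Key: value" pair at a time with IFS=': ' read -r var val _; the backslash-escaped right-hand side that xtrace prints (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) is just bash's rendering of a quoted comparison, which keeps the key from being treated as a glob pattern inside [[ ... ]]. A minimal standalone sketch of the same pattern (the function name and exit codes are illustrative, not SPDK's exact code):

#!/usr/bin/env bash
# Print the value of one /proc/meminfo counter, e.g. HugePages_Surp.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Quoting $get forces a literal string comparison, the same
        # effect as the escaped pattern shown in the xtrace output.
        if [[ $var == "$get" ]]; then
            echo "$val"   # IFS=': ' already split the " kB" unit into the throwaway field
            return 0
        fi
    done < /proc/meminfo
    return 1              # key not present in this kernel's meminfo
}
get_meminfo_sketch HugePages_Surp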
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:01.427 09:22:56 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:04:01.427 ************************************
00:04:01.427 END TEST single_node_setup
00:04:01.427 ************************************
00:04:01.427 09:22:56 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:04:01.427 09:22:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:01.427 09:22:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.427 09:22:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.427 ************************************
00:04:01.427 START TEST even_2G_alloc
00:04:01.427 ************************************
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
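The get_test_nr_hugepages_per_node trace above (the repeated hugepages.sh@80-@83 events) splits nr_hugepages=1024 evenly across _no_nodes=2, writing 512 into nodes_test[] from the highest node index down. A small sketch of that even split, under the assumption (consistent with the trace, but not SPDK's literal code) that each node simply gets nr_hugepages divided by the node count:

#!/usr/bin/env bash
# Spread a hugepage budget evenly across NUMA nodes, highest index first.
nr_hugepages=1024
no_nodes=2
per_node=$((nr_hugepages / no_nodes))
declare -a nodes_test
while ((no_nodes > 0)); do
    nodes_test[no_nodes - 1]=$per_node   # fills node1 first, then node0
    ((no_nodes--))
done
declare -p nodes_test   # declare -a nodes_test=([0]="512" [1]="512")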
00:04:01.427 09:22:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:04.712 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:04.712 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.712 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.250 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74886096 kB' 'MemAvailable: 78667308 kB' 'Buffers: 9772 kB' 'Cached: 12014928 kB' 'SwapCached: 0 kB' 'Active: 8754760 kB' 'Inactive: 3781292 kB' 'Active(anon): 8332548 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514704 kB' 'Mapped: 192548 kB' 'Shmem: 7821196 kB' 'KReclaimable: 413628 kB' 'Slab: 999928 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 586300 kB' 'KernelStack: 17440 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9705596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212020 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed (00:04:07.250-00:04:07.252, setup/common.sh@31-32): the IFS=': ' read loop walks the snapshot keys from MemTotal through HardwareCorrupted, hitting the continue branch for each, until the key is AnonHugePages]
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
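Every counter fetched here goes through the same get_meminfo path: mapfile slurps the whole file, and the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips the "Node <N> " prefix that per-node meminfo files carry, so system-wide /proc/meminfo and /sys/devices/system/node/node<N>/meminfo parse identically (in this trace node is empty, so the sysfs test at common.sh@23 fails and the global file is used). A sketch of that normalization step (the sysfs paths are the standard kernel layout; the function name is illustrative):

#!/usr/bin/env bash
shopt -s extglob                      # required for +([0-9]) in the strip below
read_meminfo_sketch() {
    local node=$1 mem_f=/proc/meminfo mem
    # With a node argument, read the per-node counters from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node <N> " line prefix
    printf '%s\n' "${mem[@]}"
}
read_meminfo_sketch                   # whole-system view
read_meminfo_sketch 0                 # node 0 view, if the node exists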
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.252 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74886092 kB' 'MemAvailable: 78667304 kB' 'Buffers: 9772 kB' 'Cached: 12014932 kB' 'SwapCached: 0 kB' 'Active: 8754464 kB' 'Inactive: 3781292 kB' 'Active(anon): 8332252 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514392 kB' 'Mapped: 192528 kB' 'Shmem: 7821200 kB' 'KReclaimable: 413628 kB' 'Slab: 999936 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 586308 kB' 'KernelStack: 17424 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9705612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212020 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed (00:04:07.252-00:04:07.254, setup/common.sh@31-32): the IFS=': ' read loop walks every key of the snapshot above, hitting the continue branch for each, until the key is HugePages_Surp]
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
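At this point verify_nr_hugepages has anon=0 and surp=0 and fetches HugePages_Rsvd the same way. Judging from the single_node_setup trace earlier (the nodes_test accounting followed by echo 'node0=1024 expecting 1024' and a literal string comparison), these counters feed a per-node check of actual versus expected hugepage counts. A hypothetical condensation of such a check, reading the standard per-node sysfs counters rather than SPDK's internal arrays (the expected values match this test's even 2G split):

#!/usr/bin/env bash
# Hypothetical per-node check: each node should hold its share of the pool.
declare -A expected=([node0]=512 [node1]=512)
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/}
    got=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "$node=$got expecting ${expected[$node]}"
    [[ $got == "${expected[$node]}" ]] || exit 1
done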
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.254 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74885588 kB' 'MemAvailable: 78666800 kB' 'Buffers: 9772 kB' 'Cached: 12014952 kB' 'SwapCached: 0 kB' 'Active: 8754496 kB' 'Inactive: 3781292 kB' 'Active(anon): 8332284 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514392 kB' 'Mapped: 192528 kB' 'Shmem: 7821220 kB' 'KReclaimable: 413628 kB' 'Slab: 999936 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 586308 kB' 'KernelStack: 17424 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9705636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212020 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[xtrace condensed (00:04:07.254-00:04:07.255, setup/common.sh@31-32): the read loop walks the snapshot keys from MemTotal through Writeback, hitting the continue branch for each; none match HugePages_Rsvd]
00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:07.255 09:23:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.255 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:07.256 nr_hugepages=1024 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:07.256 resv_hugepages=0 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:07.256 surplus_hugepages=0 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:07.256 anon_hugepages=0 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 
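[Note for readers following the trace: the setup/common.sh@16-@33 lines above are the get_meminfo() helper scanning a meminfo file one 'Key: value' line at a time, which is why every key produces an IFS=': ' / read / [[ ... ]] / continue run until the requested key matches. Below is a minimal bash sketch of that logic as implied by the trace; the extglob prefix strip and the printf-fed process substitution follow the @29 and @16 lines, and anything beyond that is an assumption, not the verbatim SPDK source.]

    shopt -s extglob  # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # a per-node query such as "get_meminfo HugePages_Surp 0" reads that node's file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # node*/meminfo lines carry a "Node N " prefix; strip it
        mem=("${mem[@]#Node +([0-9]) }")

        # scan "Key: value" pairs until the requested key matches, then print its value
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Total    # prints 1024 on the machine in this log
    get_meminfo HugePages_Surp 0   # node-0 query; prints 0 here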
00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:07.256 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
[... setup/common.sh@19-@31 elided: locals declared, mem_f=/proc/meminfo selected (no node argument), mapfile -t mem, IFS=': ' read loop started ...]
00:04:07.257 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74885336 kB' 'MemAvailable: 78666548 kB' 'Buffers: 9772 kB' 'Cached: 12014972 kB' 'SwapCached: 0 kB' 'Active: 8754916 kB' 'Inactive: 3781292 kB' 'Active(anon): 8332704 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514856 kB' 'Mapped: 192528 kB' 'Shmem: 7821240 kB' 'KReclaimable: 413628 kB' 'Slab: 999936 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 586308 kB' 'KernelStack: 17456 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9705656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212052 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... per-key scan elided: every key from MemTotal through Unaccepted is read (setup/common.sh@31), compared against HugePages_Total (setup/common.sh@32) and skipped via continue ...]
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
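[Note: the get_nodes / @114-@116 sequence that follows verifies that the even_2G_alloc test's 1024 huge pages landed as an even 512-per-node split across the two NUMA nodes. A condensed sketch of that bookkeeping, reusing the get_meminfo() sketch above; seeding nodes_test from the expected nodes_sys counts is an assumption, since the trace only shows nodes_sys being filled at @29 and nodes_test adjusted at @115-@116.]

    shopt -s extglob
    declare -a nodes_sys nodes_test
    no_nodes=0

    # expected layout for even_2G_alloc: 1024 pages split evenly, 512 per node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
        ((++no_nodes))
    done
    ((no_nodes > 0)) || exit 1

    # assumption: per-node test counts start from the expected values
    nodes_test=("${nodes_sys[@]}")

    resv=0  # from the HugePages_Rsvd query above
    for node in "${!nodes_test[@]}"; do
        ((nodes_test[node] += resv))                 # account for reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus; 0 on both nodes here
        ((nodes_test[node] += surp))
    done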
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:07.258 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.259 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:07.259 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:07.259 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
[... setup/common.sh@19-@22, @28-@31 elided: locals, mapfile -t mem, "Node 0 " prefix stripped, IFS=': ' read loop started ...]
00:04:07.259 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35570408 kB' 'MemUsed: 12494456 kB' 'SwapCached: 0 kB' 'Active: 6239568 kB' 'Inactive: 3659820 kB' 'Active(anon): 6029336 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3659820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753008 kB' 'Mapped: 116252 kB' 'AnonPages: 149528 kB' 'Shmem: 5882956 kB' 'KernelStack: 10280 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135836 kB' 'Slab: 418308 kB' 'SReclaimable: 135836 kB' 'SUnreclaim: 282472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan elided: every node0 key from MemTotal through HugePages_Free is read (setup/common.sh@31), compared against HugePages_Surp (setup/common.sh@32) and skipped via continue ...]
00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2271760 kB' 'Mapped: 76780 kB' 'AnonPages: 369248 kB' 'Shmem: 1938308 kB' 'KernelStack: 7160 kB' 'PageTables: 4708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 277792 kB' 'Slab: 581628 kB' 'SReclaimable: 277792 kB' 'SUnreclaim: 303836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.261 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.261 09:23:02 
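The trace above is setup/common.sh's get_meminfo walking a meminfo dump one field at a time: it splits "var: val" pairs with IFS=': ' and skips every field with 'continue' until the requested one matches, preferring /sys/devices/system/node/nodeN/meminfo over /proc/meminfo when a node id is given. A minimal standalone sketch of that lookup, assuming a per-line prefix strip is equivalent to the script's extglob strip (the helper name is hypothetical, not the project's API):

    #!/usr/bin/env bash
    # Sketch of the lookup traced above (hypothetical helper, not the
    # project's setup/common.sh): fetch one field from /proc/meminfo, or
    # from a node's meminfo when a node id is given and the file exists.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            # Per-node files prefix every line with "Node <id> "; strip it.
            [[ $mem_f != /proc/meminfo ]] && line=${line#Node * }
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated 'continue' above
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Surp 1   # prints 0 on the node traced above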
00:04:07.260 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: each node1 meminfo field (MemTotal .. HugePages_Free) fails the HugePages_Surp match and is skipped with 'continue']
00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:07.262 node0=512 expecting 512 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:07.262 node1=512 expecting 512 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:07.262 00:04:07.262 real 0m5.688s 00:04:07.262 user 0m2.059s 00:04:07.262 sys 0m3.641s 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.262 09:23:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.262 ************************************ 00:04:07.262 END TEST even_2G_alloc 00:04:07.262 ************************************ 00:04:07.262 09:23:02 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:07.262 09:23:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.262 09:23:02 
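At this point the harness has, for each node, added the reserved and surplus page counts onto nodes_test[node] and printed the per-node totals against the expected even share. A condensed sketch of that final comparison, using the values from the run above (hypothetical standalone version; the real check also collects the totals into sorted_t/sorted_s):

    #!/usr/bin/env bash
    # Per-node verification as just traced: every node must hold the
    # expected share of the pages requested by even_2G_alloc.
    nodes_test=(512 512)   # observed pages per node (free + resv + surp)
    expected=512           # even split in 2048 kB pages
    ok=1
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]} expecting $expected"
        (( nodes_test[node] == expected )) || ok=0
    done
    (( ok )) && echo "even split verified"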
00:04:07.262 09:23:02 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:04:07.262 09:23:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:07.262 09:23:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:07.262 09:23:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:07.262 ************************************
00:04:07.262 START TEST odd_alloc
00:04:07.262 ************************************
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:07.262 09:23:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:10.548 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:10.548 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
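get_test_nr_hugepages received 2098176 kB (HUGEMEM=2049 MB), which the script turns into nr_hugepages=1025, an intentionally odd count at the 2048 kB default page size; the per-node loop then spreads the pages from the highest node down, so node1 takes 1025/2 = 512 and node0 takes the remaining 513. A sketch of that split (hypothetical standalone version of the nodes_test arithmetic traced above):

    #!/usr/bin/env bash
    # Spread an odd page count over the nodes, highest node first, so the
    # remainder lands on node0 -- matching the ": 513" / ": 0" trace above.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test
    pages=$nr_hugepages
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        (( nodes_test[node] = pages / (node + 1) ))   # this node's share
        (( pages -= nodes_test[node] ))               # left for lower nodes
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512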
00:04:10.548 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:10.548 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74909260 kB' 'MemAvailable: 78690472 kB' 'Buffers: 9772 kB' 'Cached: 12015124 kB' 'SwapCached: 0 kB' 'Active: 8756188 kB' 'Inactive: 3781292 kB' 'Active(anon): 8333976 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515456 kB' 'Mapped: 192744 kB' 'Shmem: 7821392 kB' 'KReclaimable: 413628 kB' 'Slab: 999520 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 585892 kB' 'KernelStack: 17440 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9706436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212196 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
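Before taking this dump, verify_nr_hugepages tested the transparent-hugepage setting ('always [madvise] never') against the pattern *\[\n\e\v\e\r\]* and, since THP is not pinned to [never], went on to sample AnonHugePages (0 kB in this run). A sketch of that gate (hypothetical standalone version; awk stands in for the get_meminfo scan):

    #!/usr/bin/env bash
    # Only sample AnonHugePages when transparent huge pages can actually
    # be handed out, i.e. the enabled file does not read "... [never]".
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon} kB"   # 0 in the run above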
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.448 09:23:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: each /proc/meminfo field (MemTotal .. HardwareCorrupted) fails the AnonHugePages match and is skipped with 'continue']
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.450 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.715 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.715 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74909360 kB' 'MemAvailable: 78690572 kB' 'Buffers: 9772 kB' 'Cached: 12015128 kB' 'SwapCached: 0 kB' 'Active: 8756092 kB' 'Inactive: 3781292 kB' 'Active(anon): 8333880 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515808 kB' 'Mapped: 192668 kB' 'Shmem: 7821396 kB' 'KReclaimable: 413628 kB' 'Slab: 999512 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 585884 kB' 'KernelStack: 17440 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9706452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212164 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:12.715 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.715 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: per-field scan of the dump above for HugePages_Surp; the captured log breaks off mid-scan]
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0
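The scan that just returned surp=0 follows one recurring pattern: read /proc/meminfo (or a node-scoped sysfs copy) line by line with IFS=': ', skip every key that is not the one requested, and echo the value on the first match. Below is a minimal sketch of that pattern, reconstructed only from the commands visible in this trace; the real setup/common.sh may differ in detail, and the function name here is a stand-in.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced above (reconstructed from the log).
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node figures come from sysfs when a node is given (common.sh@23-24).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix on sysfs lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of [[ key == \H\u\g\e... ]] tests above
            echo "$val"                        # common.sh@33: emit the value and return 0
            return 0
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp  -> 0 on this box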
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74909876 kB' 'MemAvailable: 78691088 kB' 'Buffers: 9772 kB' 'Cached: 12015144 kB' 'SwapCached: 0 kB' 'Active: 8755628 kB' 'Inactive: 3781292 kB' 'Active(anon): 8333416 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515280 kB' 'Mapped: 192668 kB' 'Shmem: 7821412 kB' 'KReclaimable: 413628 kB' 'Slab: 999512 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 585884 kB' 'KernelStack: 17424 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9706472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212148 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:12.717 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [trace elided: every key from MemTotal through HugePages_Free fails the [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] test and is skipped via continue]
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:04:12.719 nr_hugepages=1025
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:12.719 resv_hugepages=0
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:12.719 surplus_hugepages=0
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:12.719 anon_hugepages=0
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
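The two (( ... )) guards above are plain integer sanity checks; with the values just printed they reduce to the following (a worked restatement of the trace, not new logic):

    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0 -> true
    (( 1025 == nr_hugepages ))                 # no surplus or reserved pages in play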
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.719 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74910552 kB' 'MemAvailable: 78691764 kB' 'Buffers: 9772 kB' 'Cached: 12015144 kB' 'SwapCached: 0 kB' 'Active: 8756132 kB' 'Inactive: 3781292 kB' 'Active(anon): 8333920 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515760 kB' 'Mapped: 192668 kB' 'Shmem: 7821412 kB' 'KReclaimable: 413628 kB' 'Slab: 999508 kB' 'SReclaimable: 413628 kB' 'SUnreclaim: 585880 kB' 'KernelStack: 17424 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9707452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212164 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
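As a cross-check on the dump above, the Hugetlb figure is exactly HugePages_Total times Hugepagesize (both values verbatim from the printf):

    echo $(( 1025 * 2048 ))   # -> 2099200, matching 'Hugetlb: 2099200 kB'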
00:04:12.720 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [trace elided: every key from MemTotal through Unaccepted fails the [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] test and is skipped via continue]
_ 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.721 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35574236 kB' 'MemUsed: 12490628 kB' 'SwapCached: 0 kB' 'Active: 6240572 kB' 'Inactive: 3659820 kB' 'Active(anon): 6030340 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3659820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753108 kB' 'Mapped: 116388 kB' 'AnonPages: 150396 kB' 'Shmem: 5883056 kB' 'KernelStack: 10248 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135836 kB' 'Slab: 417864 kB' 'SReclaimable: 135836 kB' 'SUnreclaim: 282028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' [00:04:12.721-722 09:23:08 setup.sh.hugepages.odd_alloc: setup/common.sh@31-@32 read each key of the node0 dump above, MemTotal through HugePages_Free, and test it against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; every non-matching key takes the '-- # continue' branch] 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.722 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 39328004 kB' 'MemUsed: 4892576 kB' 'SwapCached: 0 kB' 'Active: 2520840 kB' 'Inactive: 121472 kB' 'Active(anon): 2308860 kB' 'Inactive(anon): 0 kB' 'Active(file): 211980 kB' 'Inactive(file): 121472 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2271812 kB' 'Mapped: 76784 kB' 'AnonPages: 371144 kB' 'Shmem: 1938360 kB' 'KernelStack: 7176 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 277792 kB' 'Slab: 581644 kB' 'SReclaimable: 277792 kB' 'SUnreclaim: 303852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [00:04:12.722-724 09:23:08 setup.sh.hugepages.odd_alloc: setup/common.sh@31-@32 scan the node1 dump the same way, MemTotal through HugePages_Free, each non-matching key taking the '-- # continue' branch] 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
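The two HugePages_Surp lookups just traced are the same setup/common.sh get_meminfo helper each time. A minimal sketch of that helper, reconstructed from the @17-@33 statements in the trace (variable names follow the trace; the actual setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

    # get_meminfo <key> [node] -- print the value of one meminfo field,
    # per-node when a node index is given, global otherwise.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # per-node files live under /sys and prefix every line with "Node N "
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long scans in this log
            echo "$val"
            return 0
        done
        return 1
    }

get_meminfo HugePages_Surp 0 and get_meminfo HugePages_Surp 1 above both print 0, which is what feeds the (( nodes_test[node] += 0 )) accounting on each node.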
setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:12.724 node0=513 expecting 513 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:12.724 node1=512 expecting 512 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:12.724 00:04:12.724 real 0m5.411s 00:04:12.724 user 0m1.911s 00:04:12.724 sys 0m3.515s 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.724 09:23:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.724 ************************************ 00:04:12.724 END TEST odd_alloc 00:04:12.724 ************************************ 00:04:12.724 09:23:08 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:12.724 09:23:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.724 09:23:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.724 09:23:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.724 ************************************ 00:04:12.724 START TEST custom_alloc 00:04:12.724 ************************************ 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 
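The custom_alloc prologue traced here and over the next lines converts each requested size into a page count and builds the per-node HUGENODE string. A compact sketch of that arithmetic, assuming (the trace does not spell out units, but kB is the only reading consistent with the 512 and 1024 results and with 'Hugepagesize: 2048 kB') that the argument is a size in kB divided by the default hugepage size; names follow the trace (get_test_nr_hugepages, nodes_hp, HUGENODE), while spec is an illustrative variable of this sketch:

    default_hugepages=2048   # kB, from Hugepagesize in the dumps

    # size in kB -> number of default-size hugepages (hugepages.sh@48/@54/@56)
    get_test_nr_hugepages() {
        local size=$1
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
    }

    declare -a nodes_hp
    get_test_nr_hugepages 1048576    # 1 GiB -> nr_hugepages=512
    nodes_hp[0]=$nr_hugepages
    get_test_nr_hugepages 2097152    # 2 GiB -> nr_hugepages=1024
    nodes_hp[1]=$nr_hugepages

    # per-node spec handed to scripts/setup.sh (hugepages.sh@171-@177)
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    spec=$(IFS=,; printf '%s' "${HUGENODE[*]}")
    echo "$spec"   # nodes_hp[0]=512,nodes_hp[1]=1024

The @80-@83 lines traced below also show the default path for comparison: with no per-node request, the 512 pages would be spread evenly, 256 per node.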
00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for 
node in "${!nodes_hp[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.724 09:23:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:16.008 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:16.008 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:04:16.008 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.008 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.554 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73837788 kB' 'MemAvailable: 77619032 kB' 'Buffers: 9772 kB' 'Cached: 12015328 kB' 'SwapCached: 0 kB' 'Active: 8757224 kB' 'Inactive: 3781292 kB' 'Active(anon): 8335012 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516708 kB' 'Mapped: 192776 kB' 'Shmem: 7821596 kB' 'KReclaimable: 413660 kB' 'Slab: 999800 kB' 'SReclaimable: 413660 kB' 'SUnreclaim: 586140 kB' 'KernelStack: 17648 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9709788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212356 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' [00:04:18.554-556 09:23:13 setup.sh.hugepages.custom_alloc: setup/common.sh@31-@32 read each key of the global dump above, MemTotal through HardwareCorrupted, and test it against \A\n\o\n\H\u\g\e\P\a\g\e\s; every non-matching key takes the '-- # continue' branch] 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
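The trace has just recorded anon=0 and is starting the global HugePages_Surp fetch. Together with the @88-@93 locals and the @95 THP gate above, and the @109 identity seen earlier in the odd_alloc pass ((( 1025 == nr_hugepages + surp + resv ))), the verification step reduces to roughly this outline; it is a sketch assembled from the traced statements, not the literal hugepages.sh source, and the HugePages_Rsvd origin of resv is an assumption implied by the @109 arithmetic:

    verify_nr_hugepages() {
        local node surp resv anon
        # @95: only probe THP usage when THP is not set to [never]
        if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
            anon=$(get_meminfo AnonHugePages)   # 0 in this run
        fi
        surp=$(get_meminfo HugePages_Surp)      # being fetched here
        resv=$(get_meminfo HugePages_Rsvd)      # assumption: source of 'resv'
        # @109-style identity: the pool must cover every requested page,
        # i.e. (( 1536 == nr_hugepages + surp + resv )) for this run
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    }

The per-node pass then folds resv and each node's HugePages_Surp into nodes_test and echoes the 'nodeN=... expecting ...' lines, as the odd_alloc epilogue above shows.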
00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73838972 kB' 'MemAvailable: 77620216 kB' 'Buffers: 9772 kB' 'Cached: 12015332 kB' 'SwapCached: 0 kB' 'Active: 8757168 kB' 'Inactive: 3781292 kB' 'Active(anon): 8334956 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516632 kB' 'Mapped: 192684 kB' 'Shmem: 7821600 kB' 'KReclaimable: 413660 kB' 'Slab: 999688 kB' 'SReclaimable: 413660 kB' 'SUnreclaim: 586028 kB' 'KernelStack: 17488 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9709804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212260 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:18.556 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp comparison and takes "continue"]
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73839028 kB' 'MemAvailable: 77620272 kB' 'Buffers: 9772 kB' 'Cached: 12015332 kB' 'SwapCached: 0 kB' 'Active: 8757376 kB' 'Inactive: 3781292 kB' 'Active(anon): 8335164 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516824 kB' 'Mapped: 192684 kB' 'Shmem: 7821600 kB' 'KReclaimable: 413660 kB' 'Slab: 999688 kB' 'SReclaimable: 413660 kB' 'SUnreclaim: 586028 kB' 'KernelStack: 17648 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9708324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212244 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
00:04:18.558 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd comparison and takes "continue"]
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:04:18.560 nr_hugepages=1536
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:18.560 resv_hugepages=0
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:18.560 surplus_hugepages=0
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:18.560 anon_hugepages=0
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.560 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73840220 kB' 'MemAvailable: 77621464 kB' 'Buffers: 9772 kB' 'Cached: 12015372 kB' 'SwapCached: 0 kB' 'Active: 8757048 kB' 'Inactive: 3781292 kB' 'Active(anon): 8334836 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516520 kB' 'Mapped: 192684 kB' 'Shmem: 7821640 kB' 'KReclaimable: 413660 kB' 'Slab: 999592 kB' 'SReclaimable: 413660 kB' 'SUnreclaim: 585932 kB' 'KernelStack: 17472 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9709848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212260 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
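The @106/@108 guards above are the custom_alloc bookkeeping: the literal 1536 is the requested page count as expanded by xtrace, and the counters just read back from /proc/meminfo (HugePages_Surp: 0, HugePages_Rsvd: 0, AnonHugePages: 0) have to account for all of it. A worked check with the values echoed in this trace (variable names taken from the trace; the surrounding hugepages.sh logic is assumed, not shown here):

nr_hugepages=1536 surp=0 resv=0
(( 1536 == nr_hugepages + surp + resv ))   # 1536 == 1536 + 0 + 0 -> true
(( 1536 == nr_hugepages ))                 # true: no surplus or reserved pages skew the count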
00:04:18.561 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [xtrace condensed: keys MemTotal through ShmemPmdMapped each fail the HugePages_Total comparison and take "continue"]
IFS=': ' 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- 
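
The scan collapsed above is the inner loop of the get_meminfo helper in setup/common.sh, as it appears in this trace: snapshot a meminfo file with mapfile, strip the per-node "Node N " prefix, then split each line on IFS=': ' until the requested field matches. Below is a minimal self-contained sketch of that pattern (bash with extglob); it is a reconstruction from the trace, not the verbatim SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    # get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or
    # from the per-NUMA-node copy when NODE is given and the file exists.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start "Node N ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"    # splits "Field: value kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total     # prints 1536 on the machine in this log
    get_meminfo HugePages_Surp 0    # per-node form; prints 0 in this run
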
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.562 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35580980 kB' 'MemUsed: 12483884 kB' 'SwapCached: 0 kB' 'Active: 6240504 kB' 'Inactive: 3659820 kB' 'Active(anon): 6030272 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3659820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753296 kB' 'Mapped: 116404 kB' 'AnonPages: 150300 kB' 'Shmem: 5883244 kB' 'KernelStack: 10232 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135836 kB' 'Slab: 417952 kB' 'SReclaimable: 135836 kB' 'SUnreclaim: 282116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace collapsed: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for the node0 fields MemTotal through HugePages_Free]
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
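
The hugepages.sh lines in this stretch carry the pool accounting: the check at @109 requires the global HugePages_Total to equal the requested pages plus surplus plus reserved, and the @114-@116 loop then folds each node's reserve and surplus into that node's expected count. A hedged sketch of the same arithmetic, reusing the get_meminfo sketch above; the literal values are the ones from this run:

    nr_hugepages=1536 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    nodes_test=([0]=512 [1]=1024)    # per-node targets for this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # @115
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @116
    done
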
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.563 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.564 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 38250668 kB' 'MemUsed: 5969912 kB' 'SwapCached: 0 kB' 'Active: 2522188 kB' 'Inactive: 121472 kB' 'Active(anon): 2310208 kB' 'Inactive(anon): 0 kB' 'Active(file): 211980 kB' 'Inactive(file): 121472 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2271868 kB' 'Mapped: 76980 kB' 'AnonPages: 371872 kB' 'Shmem: 1938416 kB' 'KernelStack: 7144 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 277824 kB' 'Slab: 581576 kB' 'SReclaimable: 277824 kB' 'SUnreclaim: 303752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace collapsed: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for the node1 fields MemTotal through HugePages_Free]
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:04:18.565 node0=512 expecting 512
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:04:18.565 node1=1024 expecting 1024
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:18.565
00:04:18.565 real 0m5.667s
00:04:18.565 user 0m2.143s
00:04:18.565 sys 0m3.543s
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:18.565 09:23:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:18.565 ************************************
00:04:18.565 END TEST custom_alloc
00:04:18.565 ************************************
00:04:18.565 09:23:13 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:18.565 09:23:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:18.565 09:23:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.565 09:23:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:18.565 ************************************
00:04:18.565 START TEST no_shrink_alloc
00:04:18.565 ************************************
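
Just before the END TEST banner above, @125-@129 run custom_alloc's final comparison using a small bash idiom: each count is written into an indexed-array subscript (sorted_t for measured, sorted_s for expected), so the array keys come back de-duplicated and in ascending numeric order, ready to compare as joined strings. A minimal sketch of that check; the array contents are this run's, and the comma-join detail is an assumption on my part:

    nodes_test=([0]=512 [1]=1024)    # measured per-node totals
    nodes_sys=([0]=512 [1]=1024)     # expected per-node totals
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # subscript is the count itself
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    t=${!sorted_t[*]} s=${!sorted_s[*]}    # "512 1024", numerically ordered
    [[ ${t// /,} == "${s// /,}" ]]         # mirrors [[ 512,1024 == 512,1024 ]]
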
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.565 09:23:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:21.854 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:21.854 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:21.854 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
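
NRHUGE and HUGENODE above tell scripts/setup.sh to place the test's reservation on a single NUMA node, and the @95 test reads the transparent-hugepage state before the anonymous-page check that follows. setup.sh's internals are not shown in this log; the sketch below uses the stock kernel sysfs controls such a reservation ultimately drives (standard kernel paths, not SPDK-specific code):

    NRHUGE=1024 HUGENODE=0
    # Reserve 2 MiB hugepages on one NUMA node.
    echo "$NRHUGE" | sudo tee \
        /sys/devices/system/node/node"$HUGENODE"/hugepages/hugepages-2048kB/nr_hugepages
    # Per-node meminfo confirms it (lines carry a "Node N" prefix).
    grep HugePages_Total /sys/devices/system/node/node"$HUGENODE"/meminfo
    # THP state string, e.g. "always [madvise] never" as matched at @95.
    cat /sys/kernel/mm/transparent_hugepage/enabled
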
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.410 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.411 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74886260 kB' 'MemAvailable: 78667536 kB' 'Buffers: 9772 kB' 'Cached: 12015528 kB' 'SwapCached: 0 kB' 'Active: 8757880 kB' 'Inactive: 3781292 kB' 'Active(anon): 8335668 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517160 kB' 'Mapped: 192872 kB' 'Shmem: 7821796 kB' 'KReclaimable: 413692 kB' 'Slab: 1000568 kB' 'SReclaimable: 413692 kB' 'SUnreclaim: 586876 kB' 'KernelStack: 17360 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9707628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212116 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
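
The payload above is internally consistent on the hugepage side: HugePages_Total times Hugepagesize equals the Hugetlb figure, which is also the 2097152 kB size handed to get_test_nr_hugepages at the start of this test. As a quick check:

    # 1024 pages * 2048 kB/page = 2097152 kB, matching both the
    # 'Hugetlb:' line above and the size passed to get_test_nr_hugepages.
    echo $((1024 * 2048))    # -> 2097152
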
[xtrace collapsed: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for each field from MemTotal through PageTables; the log breaks off mid-scan]
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74886904 kB' 'MemAvailable: 78668180 kB' 'Buffers: 9772 kB' 'Cached: 12015532 kB' 'SwapCached: 0 kB' 'Active: 8757816 kB' 'Inactive: 3781292 kB' 'Active(anon): 8335604 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517136 kB' 'Mapped: 192756 kB' 'Shmem: 7821800 kB' 'KReclaimable: 413692 kB' 'Slab: 1000632 kB' 'SReclaimable: 413692 kB' 'SUnreclaim: 586940 kB' 'KernelStack: 17360 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9707652 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 212084 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.413 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.413 09:23:19 
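
The trace above is setup/common.sh's get_meminfo() walking /proc/meminfo line by line until the requested field matches; the backslash-escaped \A\n\o\n\H\u\g\e\P\a\g\e\s is only how bash xtrace prints the unquoted right-hand side of the [[ ]] comparison. A minimal stand-alone sketch of the same scan pattern (get_meminfo_field is a hypothetical name, not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch only: return one field's value from /proc/meminfo, as the xtrace shows.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Each non-matching field produces one "continue" record in an xtrace log.
        [[ $var == "$get" ]] || continue
        echo "$val" # value in kB for sizes, a bare page count for HugePages_* fields
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_field AnonHugePages # prints 0 on this build host
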
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.412 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74886904 kB' 'MemAvailable: 78668180 kB' 'Buffers: 9772 kB' 'Cached: 12015532 kB' 'SwapCached: 0 kB' 'Active: 8757816 kB' 'Inactive: 3781292 kB' 'Active(anon): 8335604 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517136 kB' 'Mapped: 192756 kB' 'Shmem: 7821800 kB' 'KReclaimable: 413692 kB' 'Slab: 1000632 kB' 'SReclaimable: 413692 kB' 'SUnreclaim: 586940 kB' 'KernelStack: 17360 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9707652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212084 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" records elided, one per field preceding HugePages_Surp ...]
00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
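
Before each scan, the prologue records above (node=, mem_f=, mapfile, the ${mem[@]#Node +([0-9]) } strip) pick the counter source: system-wide /proc/meminfo, or a per-NUMA-node sysfs file whose lines carry a "Node <n> " prefix. A rough sketch of that selection under the same layout the trace checks (extglob is assumed, since the +([0-9]) pattern requires it):

#!/usr/bin/env bash
shopt -s extglob                 # the +([0-9]) pattern below needs extglob
node=${1:-}                      # empty => system-wide /proc/meminfo
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }") # drop the "Node 0 " prefix on sysfs lines
printf '%s\n' "${mem[@]:0:3}"    # e.g. MemTotal/MemFree/MemAvailable
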
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 
09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.415 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.416 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.417 09:23:19 
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan: IFS=': '; read -r var val _; continue -- repeated for every non-matching meminfo key from Committed_AS through HugePages_Free]
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:24.417 nr_hugepages=1024
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:24.417 resv_hugepages=0
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:24.417 surplus_hugepages=0
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:24.417 anon_hugepages=0
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:24.417 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74886352 kB' 'MemAvailable: 78667628 kB' 'Buffers: 9772 kB' 'Cached: 12015596 kB' 'SwapCached: 0 kB' 'Active: 8758128 kB' 'Inactive: 3781292 kB' 'Active(anon): 8335916 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517340 kB' 'Mapped: 192756 kB' 'Shmem: 7821864 kB' 'KReclaimable: 413692 kB' 'Slab: 1000624 kB' 'SReclaimable: 413692 kB' 'SUnreclaim: 586932 kB' 'KernelStack: 17392 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9708068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212100 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
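For readers following the trace: get_meminfo in setup/common.sh captures the meminfo file with mapfile and walks it with IFS=': ' so that read splits "Key: value kB" into a key and a value, skipping lines via continue until the requested key matches, exactly the loop traced above and in the scan that follows. A minimal standalone sketch of the same technique, assuming a 2 MiB hugepage setup; the function name and exact structure are illustrative, not copied from the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch: return the value of one /proc/meminfo (or per-node meminfo) field.
    # Mirrors the IFS=': ' / read -r var val _ / continue loop seen in the trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _ line

        # Per-node statistics live under sysfs and carry a "Node N " prefix.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        # Strip the "Node N " prefix so both files parse identically (extglob).
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total      # -> 1024 on this box
    get_meminfo_sketch HugePages_Surp 0     # node0 value, via sysfs

The linear scan is O(number of meminfo keys) per call, which is why the trace repeats the same three xtrace lines once per key before each match.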
00:04:24.418 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan: IFS=': '; read -r var val _; continue -- repeated for every non-matching meminfo key from MemTotal through Unaccepted]
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
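The get_nodes call traced next enumerates the NUMA nodes from sysfs and records a hugepage count per node in nodes_sys, keyed by the numeric node id that ${node##*node} peels off the directory name. A hedged sketch of that enumeration; the trace only shows the resulting assignments (1024 and 0), so reading the count from the per-node nr_hugepages sysfs file is an assumption here:

    #!/usr/bin/env bash
    shopt -s extglob nullglob   # +([0-9]) glob; empty loop if no nodes match

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # "${node##*node}" strips everything through the last "node",
        # leaving just the node id (0, 1, ...).
        # Assumed source of the per-node count (not shown in the trace):
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
    echo "detected $no_nodes nodes: ${!nodes_sys[*]}"   # here: 2 nodes, ids 0 1

On this machine the loop runs twice, matching the no_nodes=2 seen below, with all 1024 pages sitting on node0.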
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34522520 kB' 'MemUsed: 13542344 kB' 'SwapCached: 0 kB' 'Active: 6241416 kB' 'Inactive: 3659820 kB' 'Active(anon): 6031184 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3659820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753448 kB' 'Mapped: 116476 kB' 'AnonPages: 150960 kB' 'Shmem: 5883396 kB' 'KernelStack: 10232 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135836 kB' 'Slab: 418248 kB' 'SReclaimable: 135836 kB' 'SUnreclaim: 282412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
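The @114-@129 steps traced around this point amount to simple per-node bookkeeping: the expected count (nodes_test) is adjusted by reserved and surplus pages, then compared against what the kernel reported (nodes_sys), with sorted_t/sorted_s collecting the distinct values on each side as array keys. A self-contained sketch of that bookkeeping, seeded with this run's node0 values for illustration; how nodes_test is originally populated is not visible in this excerpt:

    #!/usr/bin/env bash
    # Sketch of the per-node verification from hugepages.sh@114-129.
    # nodes_test = expected pages per node, nodes_sys = pages the kernel
    # reported; both seeded here with this run's values (assumption).
    nodes_test=([0]=1024) nodes_sys=([0]=1024)
    resv=0 surp=0                       # from get_meminfo HugePages_Rsvd/_Surp
    sorted_t=() sorted_s=()

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv )) # reserved pages still count as ours
        (( nodes_test[node] += surp )) # so do surplus pages
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # collect distinct expected counts
        sorted_s[nodes_sys[node]]=1    # collect distinct observed counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # prints: node0=1024 expecting 1024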
00:04:24.420 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan: IFS=': '; read -r var val _; continue -- repeated for every non-matching node0 meminfo key from MemTotal through HugePages_Free]
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:04:24.422 node0=1024 expecting 1024
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.422 09:23:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:27.701 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:27.701 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:27.701 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:29.609 INFO: Requested 512 hugepages but 1024 already allocated on node0
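The @192 lines show how the test drives scripts/setup.sh: NRHUGE, HUGENODE and CLEAR_HUGE are environment variables the script reads, and the INFO line above is its response when the node already holds more pages than requested. A sketch of reproducing the same step by hand, using this job's workspace path; run as root:

    # Ask for 512 x 2 MiB hugepages on NUMA node 0 without freeing the ones
    # already allocated (CLEAR_HUGE=no). On this box the expected output is
    # "INFO: Requested 512 hugepages but 1024 already allocated on node0",
    # since the earlier steps left 1024 pages on node0.
    sudo CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 \
        /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh

Because the existing allocation already satisfies the request and CLEAR_HUGE=no, setup.sh leaves the 1024 pages in place, which is exactly what the no_shrink_alloc test then verifies.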
allocated on node0 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74885076 kB' 'MemAvailable: 78666368 kB' 'Buffers: 9772 kB' 'Cached: 12015708 kB' 'SwapCached: 0 kB' 'Active: 8759328 kB' 'Inactive: 3781292 kB' 'Active(anon): 8337116 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518348 kB' 'Mapped: 192884 kB' 'Shmem: 7821976 kB' 'KReclaimable: 413708 kB' 'Slab: 1000444 kB' 'SReclaimable: 413708 kB' 'SUnreclaim: 586736 kB' 'KernelStack: 17376 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9708688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212260 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.610 09:23:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.610 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@32 compares each remaining /proc/meminfo field (Inactive(anon) through HardwareCorrupted) against AnonHugePages and continues past every non-match]
00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
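The cycle above is the setup/common.sh get_meminfo helper doing its work: it snapshots /proc/meminfo, walks the snapshot one 'Field: value' pair at a time with IFS=': ' read, echoes the value of the first field whose name matches the requested key, and returns 0 (here AnonHugePages, so hugepages.sh records anon=0). A minimal standalone sketch of that parsing pattern, assuming a plain /proc/meminfo source; it mirrors the shape visible in the trace, not the exact SPDK implementation:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo parsing pattern seen in the trace above.
    get_meminfo() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo           # snapshot, one array entry per field
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue     # the per-field compare/continue in the log
            echo "$val"                          # value in kB (bare page count for HugePages_*)
            return 0
        done
        return 1                                 # field not present
    }

    get_meminfo AnonHugePages                    # prints e.g. 0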
00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.611 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74884572 kB' 'MemAvailable: 78665864 kB' 'Buffers: 9772 kB' 'Cached: 12015712 kB' 'SwapCached: 0 kB' 'Active: 8759248 kB' 'Inactive: 3781292 kB' 'Active(anon): 8337036 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518252 kB' 'Mapped: 192816 kB' 'Shmem: 7821980 kB' 'KReclaimable: 413708 kB' 'Slab: 1000444 kB' 'SReclaimable: 413708 kB' 'SUnreclaim: 586736 kB' 'KernelStack: 17392 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9708708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212260 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[trace condensed: setup/common.sh@32 compares each field (MemTotal through HugePages_Rsvd) against HugePages_Surp and continues past every non-match]
00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.613 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.614 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
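Each get_meminfo call repeats the same prologue: local node= is empty here, so the -e test on /sys/devices/system/node/node/meminfo fails, the helper falls back to /proc/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the 'Node <N> ' prefix that only the per-node sysfs meminfo files carry. A hedged sketch of that source-selection branch, assuming bash extglob semantics; the node value is illustrative:

    shopt -s extglob                 # required for the +([0-9]) pattern below

    node=                            # empty => system-wide; e.g. node=0 for a NUMA node
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so the same
    # "Field: value" parser works for both sources. No-op for /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"    # first three fields, as a quick check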
00:04:29.614 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.614 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.614 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.614 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.614 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74884724 kB' 'MemAvailable: 78666016 kB' 'Buffers: 9772 kB' 'Cached: 12015728 kB' 'SwapCached: 0 kB' 'Active: 8759268 kB' 'Inactive: 3781292 kB' 'Active(anon): 8337056 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518300 kB' 'Mapped: 192816 kB' 'Shmem: 7821996 kB' 'KReclaimable: 413708 kB' 'Slab: 1000424 kB' 'SReclaimable: 413708 kB' 'SUnreclaim: 586716 kB' 'KernelStack: 17408 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9708728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212244 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[trace condensed: setup/common.sh@32 compares each field (MemTotal through HugePages_Free) against HugePages_Rsvd and continues past every non-match]
00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:29.616 nr_hugepages=1024 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:29.616 resv_hugepages=0 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:29.616 surplus_hugepages=0 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:29.616 anon_hugepages=0 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
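With anon=0, surp=0 and resv=0 just computed and 1024 pages configured, both arithmetic guards at hugepages.sh@106 and @108 hold, so the no_shrink_alloc test proceeds to re-read HugePages_Total. A minimal sketch of the same accounting check, assuming 'expected' stands in for the literal 1024 the trace shows already expanded:

    # Hugepage accounting mirroring hugepages.sh@106-108 in the trace.
    expected=1024                         # pages the test asked for
    nr_hugepages=1024 surp=0 resv=0       # values read back from /proc/meminfo

    (( expected == nr_hugepages + surp + resv )) || exit 1   # pool adds up
    (( expected == nr_hugepages ))               || exit 1   # no surplus/reserved drift
    echo "hugepage pool OK: $(( nr_hugepages * 2048 )) kB"   # 1024 x 2048 kB = Hugetlb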
00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.616 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74885092 kB' 'MemAvailable: 78666384 kB' 'Buffers: 9772 kB' 'Cached: 12015732 kB' 'SwapCached: 0 kB' 'Active: 8758912 kB' 'Inactive: 3781292 kB' 'Active(anon): 8336700 kB' 'Inactive(anon): 0 kB' 'Active(file): 422212 kB' 'Inactive(file): 3781292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517944 kB' 'Mapped: 192816 kB' 'Shmem: 7822000 kB' 'KReclaimable: 413708 kB' 'Slab: 1000424 kB' 'SReclaimable: 413708 kB' 'SUnreclaim: 586716 kB' 'KernelStack: 17392 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9708752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212244 kB' 'VmallocChunk: 0 kB' 'Percpu: 59616 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[trace condensed: setup/common.sh@32 compares each field against HugePages_Total, continuing past every non-match]
-r var val _ 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.617 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.618 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.619 09:23:24 
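The trace above is the harness's get_meminfo helper scanning a meminfo file key by key: read the file into an array, strip any per-node "Node <n>" prefix, then split each line on ': ' until the requested key matches. A minimal standalone sketch of that pattern (hypothetical function name; the real implementation lives in test/setup/common.sh and may differ in detail):

    #!/usr/bin/env bash
    # Sketch: look up one key in /proc/meminfo, or in a per-node copy under /sys.
    shopt -s extglob                     # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node meminfo lines carry a "Node <n>" prefix that must be stripped.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total      # -> 1024 on the box in this run
    get_meminfo_sketch HugePages_Surp 0     # node 0 -> 0

Linear scan is fine here: meminfo files are a few dozen lines, and returning at the first match is exactly why the trace stops at HugePages_Total instead of reading to end of file.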
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34518604 kB' 'MemUsed: 13546260 kB' 'SwapCached: 0 kB' 'Active: 6241888 kB' 'Inactive: 3659820 kB' 'Active(anon): 6031656 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3659820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753576 kB' 'Mapped: 116536 kB' 'AnonPages: 151324 kB' 'Shmem: 5883524 kB' 'KernelStack: 10248 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135820 kB' 'Slab: 417884 kB' 'SReclaimable: 135820 kB' 'SUnreclaim: 282064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.619 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... trace elided: the same compare/continue/IFS/read cycle repeats for every remaining node0 meminfo key from MemFree through HugePages_Free ...]
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:29.620 09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:04:29.621 node0=1024 expecting 1024
09:23:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:29.621
00:04:29.621 real 0m11.041s
00:04:29.621 user 0m3.992s
00:04:29.621 sys 0m7.090s
00:04:29.621 09:23:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:29.621 09:23:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:29.621 ************************************
00:04:29.621 END TEST no_shrink_alloc
00:04:29.621 ************************************
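The passing check here, (( 1024 == nr_hugepages + surp + resv )) followed by "node0=1024 expecting 1024", is per-node bookkeeping: the expected allocation plus any reserved and surplus pages per NUMA node must add back up to the configured total. A rough sketch of that accounting, reusing the helper sketched earlier (the variable names mirror the trace but are assumptions, not the harness's exact code):

    declare -A nodes_test=([0]=1024 [1]=0)   # expected split across the two NUMA nodes
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_test[node] += surp ))       # surplus pages count toward the node total
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done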
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:04:29.621 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:04:29.621
00:04:29.621 real 0m36.924s
00:04:29.621 user 0m12.073s
00:04:29.621 sys 0m21.842s
00:04:29.621 09:23:25 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:29.621 09:23:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:29.621 ************************************
00:04:29.621 END TEST hugepages
00:04:29.621 ************************************
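clear_hp, traced just above, simply zeroes every per-node, per-size hugepage pool so the next test group starts from a clean slate. Roughly, assuming root and the sysfs layout seen in this run:

    # Release all hugepage pools on every NUMA node (2 MiB and 1 GiB alike).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # tells later setup.sh invocations the pools were cleared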
00:04:29.621 09:23:25 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:29.621 09:23:25 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:29.621 09:23:25 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:29.621 09:23:25 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:29.621 ************************************
00:04:29.621 START TEST driver
00:04:29.621 ************************************
00:04:29.621 09:23:25 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:04:29.881 * Looking for test storage...
00:04:29.881 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:29.881 09:23:25 setup.sh.driver -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:29.881 09:23:25 setup.sh.driver -- common/autotest_common.sh@1681 -- # lcov --version
00:04:29.881 09:23:25 setup.sh.driver -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:29.881 09:23:25 setup.sh.driver -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-:
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-:
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<'
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@345 -- # : 1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@353 -- # local d=1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@355 -- # echo 1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@353 -- # local d=2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@355 -- # echo 2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:29.881 09:23:25 setup.sh.driver -- scripts/common.sh@368 -- # return 0
00:04:29.881 09:23:25 setup.sh.driver -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:29.881 09:23:25 setup.sh.driver -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:29.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:29.881 --rc genhtml_branch_coverage=1
00:04:29.881 --rc genhtml_function_coverage=1
00:04:29.881 --rc genhtml_legend=1
00:04:29.881 --rc geninfo_all_blocks=1
00:04:29.881 --rc geninfo_unexecuted_blocks=1
00:04:29.881 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:29.881 '
[... trace elided: the same option block is repeated verbatim for the LCOV_OPTS assignment and for the export and assignment of LCOV='lcov ...' ...]
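The lt 1.15 2 / cmp_versions trace above splits both version strings on any of '.', '-' and ':' and compares the fields numerically, left to right, padding the shorter one with zeros. A compact standalone sketch of that comparison (hypothetical function name; scripts/common.sh structures it differently):

    version_lt() {   # returns 0 if $1 sorts strictly before $2
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2"   # true, matching the trace

The harness uses the result only to decide which lcov branch-coverage flags to export, which is why the LCOV_OPTS/LCOV blocks follow immediately.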
00:04:29.881 09:23:25 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:29.881 09:23:25 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:29.881 09:23:25 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:36.448 09:23:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:36.448 09:23:31 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:36.448 09:23:31 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:36.448 09:23:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:36.448 ************************************
00:04:36.448 START TEST guess_driver
00:04:36.448 ************************************
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 190 > 0 ))
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:36.448 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:36.448 Looking for driver=vfio-pci
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.448 09:23:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
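The driver pick above hinges on two probes: a populated /sys/kernel/iommu_groups (the (( 190 > 0 )) check) and modprobe --show-depends confirming that vfio_pci resolves to real .ko files without loading anything. A sketch of that detection under the same assumptions (the uio_pci_generic fallback is an assumption about the harness's alternative, not a quote of it):

    shopt -s nullglob                        # empty dir -> empty array, not a literal '*'
    groups=(/sys/kernel/iommu_groups/*)
    # --show-depends prints the insmod commands that would run, so a non-empty
    # .ko listing proves the module exists for this kernel.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver=uio_pci_generic               # assumed fallback when the IOMMU is unusable
    fi
    echo "Looking for driver=$driver"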
00:04:39.736 09:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:39.736 09:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:39.736 09:23:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... trace elided: the same marker/driver match cycle repeats for each remaining setup.sh config line through 00:04:43.282 ...]
00:04:43.282 09:23:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:43.282 09:23:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:43.282 09:23:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.182 09:23:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:45.183 09:23:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:45.183 09:23:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:45.183 09:23:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:53.290
00:04:53.290 real 0m15.559s
00:04:53.290 user 0m3.909s
00:04:53.290 sys 0m7.775s
09:23:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:53.290 09:23:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:53.290 ************************************
00:04:53.290 END TEST guess_driver
00:04:53.290 ************************************
00:04:53.290
00:04:53.290 real 0m22.347s
00:04:53.290 user 0m5.803s
00:04:53.290 sys 0m11.858s
09:23:47 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:53.290 09:23:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:53.290 ************************************
00:04:53.290 END TEST driver
00:04:53.290 ************************************
00:04:53.290 09:23:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:53.290 09:23:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:53.290 09:23:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.290 09:23:47 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:53.290 ************************************
00:04:53.290 START TEST devices
00:04:53.290 ************************************
00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:04:53.290 * Looking for test storage...
00:04:53.290 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.290 09:23:47 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.290 --rc genhtml_branch_coverage=1 00:04:53.290 --rc genhtml_function_coverage=1 00:04:53.290 --rc genhtml_legend=1 00:04:53.290 --rc geninfo_all_blocks=1 00:04:53.290 --rc geninfo_unexecuted_blocks=1 00:04:53.290 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.290 ' 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 
00:04:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.290 --rc genhtml_branch_coverage=1 00:04:53.290 --rc genhtml_function_coverage=1 00:04:53.290 --rc genhtml_legend=1 00:04:53.290 --rc geninfo_all_blocks=1 00:04:53.290 --rc geninfo_unexecuted_blocks=1 00:04:53.290 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.290 ' 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.290 --rc genhtml_branch_coverage=1 00:04:53.290 --rc genhtml_function_coverage=1 00:04:53.290 --rc genhtml_legend=1 00:04:53.290 --rc geninfo_all_blocks=1 00:04:53.290 --rc geninfo_unexecuted_blocks=1 00:04:53.290 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.290 ' 00:04:53.290 09:23:47 setup.sh.devices -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.290 --rc genhtml_branch_coverage=1 00:04:53.290 --rc genhtml_function_coverage=1 00:04:53.290 --rc genhtml_legend=1 00:04:53.290 --rc geninfo_all_blocks=1 00:04:53.290 --rc geninfo_unexecuted_blocks=1 00:04:53.290 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.290 ' 00:04:53.290 09:23:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:53.290 09:23:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:53.290 09:23:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.290 09:23:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.562 09:23:53 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.562 09:23:53 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:58.562 09:23:53 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:58.562 09:23:53 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:58.562 09:23:53 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:58.562 09:23:53 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:58.563 09:23:53 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:58.563 09:23:53 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:58.563 No valid GPT data, bailing 00:04:58.563 09:23:53 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.563 09:23:53 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:58.563 09:23:53 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:58.563 09:23:53 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:58.563 09:23:53 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:58.563 09:23:53 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:58.563 09:23:53 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:58.563 09:23:53 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.563 09:23:53 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.563 09:23:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.563 ************************************ 00:04:58.563 START TEST nvme_mount 00:04:58.563 ************************************ 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.563 09:23:53 
setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.563 09:23:53 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:59.176 Creating new GPT entries in memory. 00:04:59.176 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.176 other utilities. 00:04:59.176 09:23:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.176 09:23:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.176 09:23:54 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.176 09:23:54 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.176 09:23:54 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:00.178 Creating new GPT entries in memory. 00:05:00.178 The operation has completed successfully. 00:05:00.178 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.178 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.178 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 456941 00:05:00.178 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.178 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.179 09:23:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.179 09:23:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 
09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:03.460 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:05.359 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.359 09:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.618 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:05.618 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:05.618 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:05.618 /dev/nvme0n1: 
calling ioctl to re-read partition table: Success 00:05:05.618 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:05.618 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:05.618 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.618 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:05.618 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.876 09:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:09.163 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:09.164 09:24:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.695 09:24:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:14.979 09:24:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.884 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.885 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:16.885 00:05:16.885 real 0m18.998s 00:05:16.885 user 0m5.584s 00:05:16.885 sys 0m10.974s 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.885 09:24:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.885 ************************************ 00:05:16.885 END TEST nvme_mount 00:05:16.885 ************************************ 00:05:16.885 09:24:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:16.885 09:24:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.885 09:24:12 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.885 09:24:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.144 ************************************ 00:05:17.144 START TEST dm_mount 00:05:17.144 ************************************ 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.144 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.081 Creating new GPT entries in memory. 00:05:18.081 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.081 other utilities. 
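The sgdisk bounds that appear in the traces below (--new=1:2048:2099199, then --new=2:2099200:4196351) fall directly out of the arithmetic declared above: size starts at 1073741824 bytes per partition, (( size /= 512 )) converts that to 2097152 sectors, and the loop lays the partitions end to end starting at sector 2048. A worked Bash sketch of that layout math:

    size=1073741824              # 1 GiB per partition, in bytes
    (( size /= 512 ))            # 512-byte sectors: 2097152
    part_start=0 part_end=0
    for part in 1 2; do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      printf 'partition %d: sectors %d..%d\n' "$part" "$part_start" "$part_end"
    done
    # partition 1: sectors 2048..2099199    (sgdisk --new=1:2048:2099199)
    # partition 2: sectors 2099200..4196351 (sgdisk --new=2:2099200:4196351)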
00:05:18.081 09:24:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.081 09:24:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.081 09:24:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.081 09:24:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.081 09:24:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.020 Creating new GPT entries in memory. 00:05:19.020 The operation has completed successfully. 00:05:19.020 09:24:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.020 09:24:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.020 09:24:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.020 09:24:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.020 09:24:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:20.399 The operation has completed successfully. 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 462499 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.399 09:24:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:23.691 09:24:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.595 09:24:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:28.874 09:24:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:30.776 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:30.776 00:05:30.776 real 0m13.656s 00:05:30.776 user 0m3.379s 00:05:30.776 sys 0m7.083s 
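The teardown just traced reverses the dm_mount construction from earlier in the log: the test resolved /dev/mapper/nvme_dm_test to its dm-N name with readlink, confirmed each member partition lists that name under its sysfs holders/ directory, and cleanup now unmounts, removes the mapping, and wipes the partitions. A condensed sketch of those steps (the device names and mount path are the ones from this run, not generic values):

    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # resolves to /dev/dm-0 in this run
    dm=${dm##*/}                                 # keep just the node name: dm-0
    # A partition held by a device-mapper target lists it under holders/.
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]
    # Teardown in reverse order of construction:
    umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1
    wipefs --all /dev/nvme0n1p2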
00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:30.776 09:24:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:05:30.776 ************************************
00:05:30.776 END TEST dm_mount
00:05:30.776 ************************************
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:30.776 09:24:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:31.035 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:31.035 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:31.035 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:31.035 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:31.035 09:24:26 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:31.035
00:05:31.035 real 0m38.935s
00:05:31.035 user 0m10.855s
00:05:31.035 sys 0m22.214s
00:05:31.035 09:24:26 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.035 09:24:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:31.035 ************************************
00:05:31.035 END TEST devices
00:05:31.035 ************************************
00:05:31.035
00:05:31.035 real 2m15.138s
00:05:31.035 user 0m39.401s
00:05:31.035 sys 1m18.065s
00:05:31.035 09:24:26 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.035 09:24:26 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:31.035 ************************************
00:05:31.035 END TEST setup.sh
00:05:31.035 ************************************
00:05:31.035 09:24:26 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:05:34.331 Hugepages
00:05:34.331 node hugesize free / total
00:05:34.331 node0 1048576kB 0 / 0
00:05:34.331 node0 2048kB 1024 / 1024
00:05:34.331 node1 1048576kB 0 / 0
00:05:34.331 node1 2048kB 1024 / 1024
00:05:34.331
00:05:34.331 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:34.331 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:34.331 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:34.331 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:34.331 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:34.331 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:34.331 09:24:29 -- spdk/autotest.sh@117 -- # uname -s
00:05:34.331 09:24:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:34.331 09:24:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:34.331 09:24:29 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:37.618 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:37.618 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:40.906 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
00:05:43.440 09:24:38 -- common/autotest_common.sh@1515 -- # sleep 1
00:05:44.007 09:24:39 -- common/autotest_common.sh@1516 -- # bdfs=()
00:05:44.007 09:24:39 -- common/autotest_common.sh@1516 -- # local bdfs
00:05:44.007 09:24:39 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:05:44.007 09:24:39 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:05:44.007 09:24:39 -- common/autotest_common.sh@1496 -- # bdfs=()
00:05:44.007 09:24:39 -- common/autotest_common.sh@1496 -- # local bdfs
00:05:44.007 09:24:39 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:44.007 09:24:39 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:44.007 09:24:39 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:05:44.007 09:24:39 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:05:44.007 09:24:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0
00:05:44.007 09:24:39 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:05:47.296 Waiting for block devices as requested
00:05:47.556 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme
00:05:47.556 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:47.556 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:47.556 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:47.816 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:47.816 0000:00:04.3
(8086 2021): vfio-pci -> ioatdma 00:05:47.816 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:48.076 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:48.076 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:48.076 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:48.335 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:48.335 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:48.335 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:48.595 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:48.595 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:48.595 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:48.854 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:50.762 09:24:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:50.762 09:24:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:05:50.762 09:24:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:50.762 09:24:46 -- common/autotest_common.sh@1485 -- # grep 0000:1a:00.0/nvme/nvme 00:05:50.762 09:24:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:50.762 09:24:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:05:50.762 09:24:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:05:51.022 09:24:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:51.022 09:24:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:51.022 09:24:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:51.022 09:24:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:51.022 09:24:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:51.022 09:24:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:51.022 09:24:46 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:05:51.022 09:24:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:51.022 09:24:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:51.022 09:24:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:51.022 09:24:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:51.022 09:24:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:51.022 09:24:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:51.022 09:24:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:51.022 09:24:46 -- common/autotest_common.sh@1541 -- # continue 00:05:51.022 09:24:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:51.022 09:24:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.022 09:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.022 09:24:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:51.022 09:24:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.022 09:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.022 09:24:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:54.310 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.2 (8086 
2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:54.310 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:57.598 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:59.670 09:24:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:59.670 09:24:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.670 09:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.670 09:24:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:59.670 09:24:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:59.670 09:24:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:59.670 09:24:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:59.670 09:24:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:59.670 09:24:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:59.670 09:24:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:59.670 09:24:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:59.670 09:24:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:59.670 09:24:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:59.670 09:24:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.670 09:24:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:59.670 09:24:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:59.670 09:24:54 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:59.671 09:24:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:05:59.671 09:24:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:59.671 09:24:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:05:59.671 09:24:54 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:59.671 09:24:54 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:59.671 09:24:54 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:59.671 09:24:54 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:59.671 09:24:54 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:05:59.671 09:24:54 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:05:59.671 09:24:54 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=472446 00:05:59.671 09:24:54 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.671 09:24:54 -- common/autotest_common.sh@1583 -- # waitforlisten 472446 00:05:59.671 09:24:54 -- common/autotest_common.sh@831 -- # '[' -z 472446 ']' 00:05:59.671 09:24:54 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.671 09:24:54 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.671 09:24:54 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
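Two things happen back to back in the trace above: the OPAL cleanup first narrows the detected controllers to a given PCI device ID read straight from sysfs (0x0a54 here, this box's Intel datacenter NVMe), and only then launches spdk_tgt and blocks until the RPC socket /var/tmp/spdk.sock appears. A sketch of that device-ID filter, assuming the same sysfs layout as this run:

    # Sketch of get_nvme_bdfs_by_id as traced above: keep only controllers
    # whose PCI device ID matches the one the test can OPAL-revert.
    want=0x0a54
    matches=()
    for bdf in $(/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && matches+=("$bdf")
    done
    printf '%s\n' "${matches[@]}"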
00:05:59.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.671 09:24:54 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.671 09:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.671 [2024-10-07 09:24:55.003303] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:59.671 [2024-10-07 09:24:55.003377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472446 ] 00:05:59.671 [2024-10-07 09:24:55.078504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.671 [2024-10-07 09:24:55.165512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.633 09:24:55 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.633 09:24:55 -- common/autotest_common.sh@864 -- # return 0 00:06:00.633 09:24:55 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:06:00.633 09:24:55 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:06:00.633 09:24:55 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:06:03.923 nvme0n1 00:06:03.923 09:24:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:03.923 [2024-10-07 09:24:59.073124] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:03.923 request: 00:06:03.923 { 00:06:03.923 "nvme_ctrlr_name": "nvme0", 00:06:03.923 "password": "test", 00:06:03.923 "method": "bdev_nvme_opal_revert", 00:06:03.923 "req_id": 1 00:06:03.923 } 00:06:03.923 Got JSON-RPC error response 00:06:03.923 response: 00:06:03.923 { 00:06:03.923 "code": -32602, 00:06:03.923 "message": "Invalid parameters" 00:06:03.923 } 00:06:03.923 09:24:59 -- common/autotest_common.sh@1589 -- # true 00:06:03.923 09:24:59 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:03.923 09:24:59 -- common/autotest_common.sh@1593 -- # killprocess 472446 00:06:03.923 09:24:59 -- common/autotest_common.sh@950 -- # '[' -z 472446 ']' 00:06:03.923 09:24:59 -- common/autotest_common.sh@954 -- # kill -0 472446 00:06:03.923 09:24:59 -- common/autotest_common.sh@955 -- # uname 00:06:03.923 09:24:59 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.923 09:24:59 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472446 00:06:03.923 09:24:59 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.923 09:24:59 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.923 09:24:59 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472446' 00:06:03.923 killing process with pid 472446 00:06:03.923 09:24:59 -- common/autotest_common.sh@969 -- # kill 472446 00:06:03.923 09:24:59 -- common/autotest_common.sh@974 -- # wait 472446 00:06:08.118 09:25:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:08.118 09:25:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:08.118 09:25:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.118 09:25:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.118 09:25:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:08.118 09:25:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.118 09:25:03 -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.118 09:25:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:08.118 09:25:03 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:08.118 09:25:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.118 09:25:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.118 09:25:03 -- common/autotest_common.sh@10 -- # set +x 00:06:08.118 ************************************ 00:06:08.118 START TEST env 00:06:08.118 ************************************ 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:08.118 * Looking for test storage... 00:06:08.118 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.118 09:25:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.118 09:25:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.118 09:25:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.118 09:25:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.118 09:25:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.118 09:25:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.118 09:25:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.118 09:25:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.118 09:25:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.118 09:25:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.118 09:25:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.118 09:25:03 env -- scripts/common.sh@344 -- # case "$op" in 00:06:08.118 09:25:03 env -- scripts/common.sh@345 -- # : 1 00:06:08.118 09:25:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.118 09:25:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.118 09:25:03 env -- scripts/common.sh@365 -- # decimal 1 00:06:08.118 09:25:03 env -- scripts/common.sh@353 -- # local d=1 00:06:08.118 09:25:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.118 09:25:03 env -- scripts/common.sh@355 -- # echo 1 00:06:08.118 09:25:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.118 09:25:03 env -- scripts/common.sh@366 -- # decimal 2 00:06:08.118 09:25:03 env -- scripts/common.sh@353 -- # local d=2 00:06:08.118 09:25:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.118 09:25:03 env -- scripts/common.sh@355 -- # echo 2 00:06:08.118 09:25:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.118 09:25:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.118 09:25:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.118 09:25:03 env -- scripts/common.sh@368 -- # return 0 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.118 --rc genhtml_branch_coverage=1 00:06:08.118 --rc genhtml_function_coverage=1 00:06:08.118 --rc genhtml_legend=1 00:06:08.118 --rc geninfo_all_blocks=1 00:06:08.118 --rc geninfo_unexecuted_blocks=1 00:06:08.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.118 ' 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.118 --rc genhtml_branch_coverage=1 00:06:08.118 --rc genhtml_function_coverage=1 00:06:08.118 --rc genhtml_legend=1 00:06:08.118 --rc geninfo_all_blocks=1 00:06:08.118 --rc geninfo_unexecuted_blocks=1 00:06:08.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.118 ' 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.118 --rc genhtml_branch_coverage=1 00:06:08.118 --rc genhtml_function_coverage=1 00:06:08.118 --rc genhtml_legend=1 00:06:08.118 --rc geninfo_all_blocks=1 00:06:08.118 --rc geninfo_unexecuted_blocks=1 00:06:08.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.118 ' 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.118 --rc genhtml_branch_coverage=1 00:06:08.118 --rc genhtml_function_coverage=1 00:06:08.118 --rc genhtml_legend=1 00:06:08.118 --rc geninfo_all_blocks=1 00:06:08.118 --rc geninfo_unexecuted_blocks=1 00:06:08.118 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.118 ' 00:06:08.118 09:25:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.118 09:25:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.118 09:25:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.118 ************************************ 00:06:08.118 START TEST env_memory 00:06:08.118 ************************************ 00:06:08.118 09:25:03 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:08.118 00:06:08.118 00:06:08.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.118 http://cunit.sourceforge.net/ 00:06:08.118 00:06:08.118 00:06:08.118 Suite: memory 00:06:08.118 Test: alloc and free memory map ...[2024-10-07 09:25:03.438244] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:08.118 passed 00:06:08.118 Test: mem map translation ...[2024-10-07 09:25:03.452298] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:08.118 [2024-10-07 09:25:03.452316] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:08.118 [2024-10-07 09:25:03.452348] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:08.119 [2024-10-07 09:25:03.452356] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:08.119 passed 00:06:08.119 Test: mem map registration ...[2024-10-07 09:25:03.472859] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:08.119 [2024-10-07 09:25:03.472877] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:08.119 passed 00:06:08.119 Test: mem map adjacent registrations ...passed 00:06:08.119 00:06:08.119 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.119 suites 1 1 n/a 0 0 00:06:08.119 tests 4 4 4 0 0 00:06:08.119 asserts 152 152 152 0 n/a 00:06:08.119 00:06:08.119 Elapsed time = 0.087 seconds 00:06:08.119 00:06:08.119 real 0m0.099s 00:06:08.119 user 0m0.089s 00:06:08.119 sys 0m0.010s 00:06:08.119 09:25:03 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.119 09:25:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:08.119 ************************************ 00:06:08.119 END TEST env_memory 00:06:08.119 ************************************ 00:06:08.119 09:25:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:08.119 09:25:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.119 09:25:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.119 09:25:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.119 ************************************ 00:06:08.119 START TEST env_vtophys 00:06:08.119 ************************************ 00:06:08.119 09:25:03 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:08.119 EAL: lib.eal log level changed from notice to debug 00:06:08.119 EAL: Detected lcore 0 as core 0 on socket 0 00:06:08.119 EAL: Detected lcore 1 as core 1 on socket 0 00:06:08.119 EAL: Detected lcore 2 as core 2 on socket 0 00:06:08.119 EAL: Detected lcore 3 as 
core 3 on socket 0 00:06:08.119 EAL: Detected lcore 4 as core 4 on socket 0 00:06:08.119 EAL: Detected lcore 5 as core 8 on socket 0 00:06:08.119 EAL: Detected lcore 6 as core 9 on socket 0 00:06:08.119 EAL: Detected lcore 7 as core 10 on socket 0 00:06:08.119 EAL: Detected lcore 8 as core 11 on socket 0 00:06:08.119 EAL: Detected lcore 9 as core 16 on socket 0 00:06:08.119 EAL: Detected lcore 10 as core 17 on socket 0 00:06:08.119 EAL: Detected lcore 11 as core 18 on socket 0 00:06:08.119 EAL: Detected lcore 12 as core 19 on socket 0 00:06:08.119 EAL: Detected lcore 13 as core 20 on socket 0 00:06:08.119 EAL: Detected lcore 14 as core 24 on socket 0 00:06:08.119 EAL: Detected lcore 15 as core 25 on socket 0 00:06:08.119 EAL: Detected lcore 16 as core 26 on socket 0 00:06:08.119 EAL: Detected lcore 17 as core 27 on socket 0 00:06:08.119 EAL: Detected lcore 18 as core 0 on socket 1 00:06:08.119 EAL: Detected lcore 19 as core 1 on socket 1 00:06:08.119 EAL: Detected lcore 20 as core 2 on socket 1 00:06:08.119 EAL: Detected lcore 21 as core 3 on socket 1 00:06:08.119 EAL: Detected lcore 22 as core 4 on socket 1 00:06:08.119 EAL: Detected lcore 23 as core 8 on socket 1 00:06:08.119 EAL: Detected lcore 24 as core 9 on socket 1 00:06:08.119 EAL: Detected lcore 25 as core 10 on socket 1 00:06:08.119 EAL: Detected lcore 26 as core 11 on socket 1 00:06:08.119 EAL: Detected lcore 27 as core 16 on socket 1 00:06:08.119 EAL: Detected lcore 28 as core 17 on socket 1 00:06:08.119 EAL: Detected lcore 29 as core 18 on socket 1 00:06:08.119 EAL: Detected lcore 30 as core 19 on socket 1 00:06:08.119 EAL: Detected lcore 31 as core 20 on socket 1 00:06:08.119 EAL: Detected lcore 32 as core 24 on socket 1 00:06:08.119 EAL: Detected lcore 33 as core 25 on socket 1 00:06:08.119 EAL: Detected lcore 34 as core 26 on socket 1 00:06:08.119 EAL: Detected lcore 35 as core 27 on socket 1 00:06:08.119 EAL: Detected lcore 36 as core 0 on socket 0 00:06:08.119 EAL: Detected lcore 37 as core 1 on socket 0 00:06:08.119 EAL: Detected lcore 38 as core 2 on socket 0 00:06:08.119 EAL: Detected lcore 39 as core 3 on socket 0 00:06:08.119 EAL: Detected lcore 40 as core 4 on socket 0 00:06:08.119 EAL: Detected lcore 41 as core 8 on socket 0 00:06:08.119 EAL: Detected lcore 42 as core 9 on socket 0 00:06:08.119 EAL: Detected lcore 43 as core 10 on socket 0 00:06:08.119 EAL: Detected lcore 44 as core 11 on socket 0 00:06:08.119 EAL: Detected lcore 45 as core 16 on socket 0 00:06:08.119 EAL: Detected lcore 46 as core 17 on socket 0 00:06:08.119 EAL: Detected lcore 47 as core 18 on socket 0 00:06:08.119 EAL: Detected lcore 48 as core 19 on socket 0 00:06:08.119 EAL: Detected lcore 49 as core 20 on socket 0 00:06:08.119 EAL: Detected lcore 50 as core 24 on socket 0 00:06:08.119 EAL: Detected lcore 51 as core 25 on socket 0 00:06:08.119 EAL: Detected lcore 52 as core 26 on socket 0 00:06:08.119 EAL: Detected lcore 53 as core 27 on socket 0 00:06:08.119 EAL: Detected lcore 54 as core 0 on socket 1 00:06:08.119 EAL: Detected lcore 55 as core 1 on socket 1 00:06:08.119 EAL: Detected lcore 56 as core 2 on socket 1 00:06:08.119 EAL: Detected lcore 57 as core 3 on socket 1 00:06:08.119 EAL: Detected lcore 58 as core 4 on socket 1 00:06:08.119 EAL: Detected lcore 59 as core 8 on socket 1 00:06:08.119 EAL: Detected lcore 60 as core 9 on socket 1 00:06:08.119 EAL: Detected lcore 61 as core 10 on socket 1 00:06:08.119 EAL: Detected lcore 62 as core 11 on socket 1 00:06:08.119 EAL: Detected lcore 63 as core 16 on socket 1 00:06:08.119 EAL: 
Detected lcore 64 as core 17 on socket 1 00:06:08.119 EAL: Detected lcore 65 as core 18 on socket 1 00:06:08.119 EAL: Detected lcore 66 as core 19 on socket 1 00:06:08.119 EAL: Detected lcore 67 as core 20 on socket 1 00:06:08.119 EAL: Detected lcore 68 as core 24 on socket 1 00:06:08.119 EAL: Detected lcore 69 as core 25 on socket 1 00:06:08.119 EAL: Detected lcore 70 as core 26 on socket 1 00:06:08.119 EAL: Detected lcore 71 as core 27 on socket 1 00:06:08.119 EAL: Maximum logical cores by configuration: 128 00:06:08.119 EAL: Detected CPU lcores: 72 00:06:08.119 EAL: Detected NUMA nodes: 2 00:06:08.119 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:08.119 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:08.119 EAL: Checking presence of .so 'librte_eal.so' 00:06:08.119 EAL: Detected static linkage of DPDK 00:06:08.119 EAL: No shared files mode enabled, IPC will be disabled 00:06:08.119 EAL: Bus pci wants IOVA as 'DC' 00:06:08.119 EAL: Buses did not request a specific IOVA mode. 00:06:08.119 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:08.119 EAL: Selected IOVA mode 'VA' 00:06:08.119 EAL: Probing VFIO support... 00:06:08.119 EAL: IOMMU type 1 (Type 1) is supported 00:06:08.119 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:08.119 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:08.119 EAL: VFIO support initialized 00:06:08.119 EAL: Ask a virtual area of 0x2e000 bytes 00:06:08.119 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:08.119 EAL: Setting up physically contiguous memory... 00:06:08.119 EAL: Setting maximum number of open files to 524288 00:06:08.119 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:08.119 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:08.119 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:08.119 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:08.119 EAL: Ask a virtual area of 0x61000 bytes 00:06:08.119 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:08.119 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:08.119 EAL: Ask a virtual area of 0x400000000 bytes 00:06:08.119 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:08.119 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:08.119 EAL: Hugepages will be freed exactly as allocated. 00:06:08.119 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: TSC frequency is ~2300000 KHz 00:06:08.120 EAL: Main lcore 0 is ready (tid=7ff68553aa00;cpuset=[0]) 00:06:08.120 EAL: Trying to obtain current memory policy. 00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.120 EAL: Restoring previous memory policy: 0 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was expanded by 2MB 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Mem event callback 'spdk:(nil)' registered 00:06:08.120 00:06:08.120 00:06:08.120 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.120 http://cunit.sourceforge.net/ 00:06:08.120 00:06:08.120 00:06:08.120 Suite: components_suite 00:06:08.120 Test: vtophys_malloc_test ...passed 00:06:08.120 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.120 EAL: Restoring previous memory policy: 4 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was expanded by 4MB 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was shrunk by 4MB 00:06:08.120 EAL: Trying to obtain current memory policy. 
00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.120 EAL: Restoring previous memory policy: 4 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was expanded by 6MB 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was shrunk by 6MB 00:06:08.120 EAL: Trying to obtain current memory policy. 00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.120 EAL: Restoring previous memory policy: 4 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was expanded by 10MB 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was shrunk by 10MB 00:06:08.120 EAL: Trying to obtain current memory policy. 00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.120 EAL: Restoring previous memory policy: 4 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was expanded by 18MB 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was shrunk by 18MB 00:06:08.120 EAL: Trying to obtain current memory policy. 00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.120 EAL: Restoring previous memory policy: 4 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was expanded by 34MB 00:06:08.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.120 EAL: request: mp_malloc_sync 00:06:08.120 EAL: No shared files mode enabled, IPC is disabled 00:06:08.120 EAL: Heap on socket 0 was shrunk by 34MB 00:06:08.120 EAL: Trying to obtain current memory policy. 00:06:08.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.379 EAL: Restoring previous memory policy: 4 00:06:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.379 EAL: request: mp_malloc_sync 00:06:08.379 EAL: No shared files mode enabled, IPC is disabled 00:06:08.379 EAL: Heap on socket 0 was expanded by 66MB 00:06:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.379 EAL: request: mp_malloc_sync 00:06:08.379 EAL: No shared files mode enabled, IPC is disabled 00:06:08.379 EAL: Heap on socket 0 was shrunk by 66MB 00:06:08.379 EAL: Trying to obtain current memory policy. 
00:06:08.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.379 EAL: Restoring previous memory policy: 4 00:06:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.379 EAL: request: mp_malloc_sync 00:06:08.379 EAL: No shared files mode enabled, IPC is disabled 00:06:08.379 EAL: Heap on socket 0 was expanded by 130MB 00:06:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.379 EAL: request: mp_malloc_sync 00:06:08.379 EAL: No shared files mode enabled, IPC is disabled 00:06:08.379 EAL: Heap on socket 0 was shrunk by 130MB 00:06:08.379 EAL: Trying to obtain current memory policy. 00:06:08.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.379 EAL: Restoring previous memory policy: 4 00:06:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.379 EAL: request: mp_malloc_sync 00:06:08.379 EAL: No shared files mode enabled, IPC is disabled 00:06:08.379 EAL: Heap on socket 0 was expanded by 258MB 00:06:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.379 EAL: request: mp_malloc_sync 00:06:08.379 EAL: No shared files mode enabled, IPC is disabled 00:06:08.379 EAL: Heap on socket 0 was shrunk by 258MB 00:06:08.379 EAL: Trying to obtain current memory policy. 00:06:08.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.638 EAL: Restoring previous memory policy: 4 00:06:08.638 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.638 EAL: request: mp_malloc_sync 00:06:08.638 EAL: No shared files mode enabled, IPC is disabled 00:06:08.638 EAL: Heap on socket 0 was expanded by 514MB 00:06:08.638 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.897 EAL: request: mp_malloc_sync 00:06:08.897 EAL: No shared files mode enabled, IPC is disabled 00:06:08.897 EAL: Heap on socket 0 was shrunk by 514MB 00:06:08.897 EAL: Trying to obtain current memory policy. 
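Each vtophys round above roughly doubles the allocation (4MB, 6MB, 10MB, 18MB and so on up to 1026MB), and every expanded/shrunk message corresponds to 2048kB hugepages moving in and out of the per-node pools that setup.sh status reported earlier. A convenience one-liner, not part of the harness, for watching that from a second shell while the test runs:

    # Poll per-node free hugepage counts; the free count dips on each heap
    # expansion and recovers when the matching shrink is logged.
    watch -n1 'grep -H "" /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages'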
00:06:08.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.897 EAL: Restoring previous memory policy: 4 00:06:08.897 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.897 EAL: request: mp_malloc_sync 00:06:08.897 EAL: No shared files mode enabled, IPC is disabled 00:06:08.897 EAL: Heap on socket 0 was expanded by 1026MB 00:06:09.156 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.415 EAL: request: mp_malloc_sync 00:06:09.415 EAL: No shared files mode enabled, IPC is disabled 00:06:09.415 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:09.415 passed 00:06:09.415 00:06:09.415 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.415 suites 1 1 n/a 0 0 00:06:09.415 tests 2 2 2 0 0 00:06:09.415 asserts 497 497 497 0 n/a 00:06:09.415 00:06:09.415 Elapsed time = 1.129 seconds 00:06:09.415 EAL: Calling mem event callback 'spdk:(nil)' 00:06:09.415 EAL: request: mp_malloc_sync 00:06:09.415 EAL: No shared files mode enabled, IPC is disabled 00:06:09.415 EAL: Heap on socket 0 was shrunk by 2MB 00:06:09.415 EAL: No shared files mode enabled, IPC is disabled 00:06:09.415 EAL: No shared files mode enabled, IPC is disabled 00:06:09.415 EAL: No shared files mode enabled, IPC is disabled 00:06:09.415 00:06:09.415 real 0m1.238s 00:06:09.415 user 0m0.718s 00:06:09.415 sys 0m0.491s 00:06:09.415 09:25:04 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.415 09:25:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:09.415 ************************************ 00:06:09.415 END TEST env_vtophys 00:06:09.415 ************************************ 00:06:09.415 09:25:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:09.415 09:25:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.415 09:25:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.415 09:25:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.415 ************************************ 00:06:09.415 START TEST env_pci 00:06:09.416 ************************************ 00:06:09.416 09:25:04 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:09.416 00:06:09.416 00:06:09.416 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.416 http://cunit.sourceforge.net/ 00:06:09.416 00:06:09.416 00:06:09.416 Suite: pci 00:06:09.416 Test: pci_hook ...[2024-10-07 09:25:04.896449] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 473798 has claimed it 00:06:09.416 EAL: Cannot find device (10000:00:01.0) 00:06:09.416 EAL: Failed to attach device on primary process 00:06:09.416 passed 00:06:09.416 00:06:09.416 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.416 suites 1 1 n/a 0 0 00:06:09.416 tests 1 1 1 0 0 00:06:09.416 asserts 25 25 25 0 n/a 00:06:09.416 00:06:09.416 Elapsed time = 0.031 seconds 00:06:09.416 00:06:09.416 real 0m0.043s 00:06:09.416 user 0m0.013s 00:06:09.416 sys 0m0.030s 00:06:09.416 09:25:04 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.416 09:25:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:09.416 ************************************ 00:06:09.416 END TEST env_pci 00:06:09.416 ************************************ 00:06:09.416 09:25:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:09.416 
09:25:04 env -- env/env.sh@15 -- # uname 00:06:09.416 09:25:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:09.416 09:25:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:09.416 09:25:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:09.416 09:25:04 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:09.416 09:25:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.416 09:25:04 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.675 ************************************ 00:06:09.675 START TEST env_dpdk_post_init 00:06:09.675 ************************************ 00:06:09.675 09:25:05 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:09.675 EAL: Detected CPU lcores: 72 00:06:09.675 EAL: Detected NUMA nodes: 2 00:06:09.675 EAL: Detected static linkage of DPDK 00:06:09.675 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:09.675 EAL: Selected IOVA mode 'VA' 00:06:09.675 EAL: VFIO support initialized 00:06:09.675 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:09.675 EAL: Using IOMMU type 1 (Type 1) 00:06:10.610 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:15.886 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:15.886 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:16.145 Starting DPDK initialization... 00:06:16.145 Starting SPDK post initialization... 00:06:16.145 SPDK NVMe probe 00:06:16.145 Attaching to 0000:1a:00.0 00:06:16.145 Attached to 0000:1a:00.0 00:06:16.145 Cleaning up... 
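The env_dpdk_post_init probe above was launched with EAL arguments assembled a few trace steps earlier: env.sh always passes a one-core mask, and on Linux it additionally pins --base-virtaddr, which DPDK typically uses so that mappings land at predictable addresses across processes. A condensed sketch of that assembly, with run_test's timing and tracing wrapper omitted:

    argv='-c 0x1 '
    [[ $(uname) == Linux ]] && argv+=--base-virtaddr=0x200000000000
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv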
00:06:16.145 00:06:16.145 real 0m6.531s 00:06:16.145 user 0m4.787s 00:06:16.145 sys 0m0.995s 00:06:16.145 09:25:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.145 09:25:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.145 ************************************ 00:06:16.145 END TEST env_dpdk_post_init 00:06:16.145 ************************************ 00:06:16.145 09:25:11 env -- env/env.sh@26 -- # uname 00:06:16.145 09:25:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:16.145 09:25:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:16.145 09:25:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.145 09:25:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.145 09:25:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.145 ************************************ 00:06:16.145 START TEST env_mem_callbacks 00:06:16.145 ************************************ 00:06:16.145 09:25:11 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:16.145 EAL: Detected CPU lcores: 72 00:06:16.145 EAL: Detected NUMA nodes: 2 00:06:16.145 EAL: Detected static linkage of DPDK 00:06:16.145 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:16.145 EAL: Selected IOVA mode 'VA' 00:06:16.145 EAL: VFIO support initialized 00:06:16.145 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:16.145 00:06:16.145 00:06:16.145 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.145 http://cunit.sourceforge.net/ 00:06:16.145 00:06:16.145 00:06:16.145 Suite: memory 00:06:16.145 Test: test ... 
00:06:16.145 register 0x200000200000 2097152 00:06:16.145 malloc 3145728 00:06:16.145 register 0x200000400000 4194304 00:06:16.145 buf 0x200000500000 len 3145728 PASSED 00:06:16.145 malloc 64 00:06:16.145 buf 0x2000004fff40 len 64 PASSED 00:06:16.145 malloc 4194304 00:06:16.145 register 0x200000800000 6291456 00:06:16.145 buf 0x200000a00000 len 4194304 PASSED 00:06:16.145 free 0x200000500000 3145728 00:06:16.145 free 0x2000004fff40 64 00:06:16.145 unregister 0x200000400000 4194304 PASSED 00:06:16.145 free 0x200000a00000 4194304 00:06:16.145 unregister 0x200000800000 6291456 PASSED 00:06:16.145 malloc 8388608 00:06:16.145 register 0x200000400000 10485760 00:06:16.145 buf 0x200000600000 len 8388608 PASSED 00:06:16.145 free 0x200000600000 8388608 00:06:16.145 unregister 0x200000400000 10485760 PASSED 00:06:16.145 passed 00:06:16.145 00:06:16.145 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.145 suites 1 1 n/a 0 0 00:06:16.145 tests 1 1 1 0 0 00:06:16.146 asserts 15 15 15 0 n/a 00:06:16.146 00:06:16.146 Elapsed time = 0.006 seconds 00:06:16.146 00:06:16.146 real 0m0.068s 00:06:16.146 user 0m0.025s 00:06:16.146 sys 0m0.042s 00:06:16.146 09:25:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.146 09:25:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:16.146 ************************************ 00:06:16.146 END TEST env_mem_callbacks 00:06:16.146 ************************************ 00:06:16.405 00:06:16.405 real 0m8.553s 00:06:16.405 user 0m5.878s 00:06:16.405 sys 0m1.938s 00:06:16.405 09:25:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.405 09:25:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.405 ************************************ 00:06:16.405 END TEST env 00:06:16.405 ************************************ 00:06:16.405 09:25:11 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:16.405 09:25:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.405 09:25:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.405 09:25:11 -- common/autotest_common.sh@10 -- # set +x 00:06:16.405 ************************************ 00:06:16.405 START TEST rpc 00:06:16.405 ************************************ 00:06:16.405 09:25:11 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:16.405 * Looking for test storage... 
00:06:16.405 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:16.405 09:25:11 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.405 09:25:11 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.405 09:25:11 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.664 09:25:11 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.664 09:25:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.664 09:25:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.664 09:25:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.664 09:25:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.664 09:25:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.664 09:25:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.664 09:25:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.664 09:25:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.664 09:25:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.664 09:25:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.664 09:25:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.664 09:25:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.664 09:25:11 rpc -- scripts/common.sh@345 -- # : 1 00:06:16.664 09:25:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.664 09:25:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.664 09:25:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.664 09:25:12 rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.664 09:25:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.664 09:25:12 rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.664 09:25:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.664 09:25:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.664 09:25:12 rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.664 09:25:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.664 09:25:12 rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.664 09:25:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.664 09:25:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.664 09:25:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.664 09:25:12 rpc -- scripts/common.sh@368 -- # return 0 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.664 --rc genhtml_branch_coverage=1 00:06:16.664 --rc genhtml_function_coverage=1 00:06:16.664 --rc genhtml_legend=1 00:06:16.664 --rc geninfo_all_blocks=1 00:06:16.664 --rc geninfo_unexecuted_blocks=1 00:06:16.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.664 ' 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.664 --rc genhtml_branch_coverage=1 00:06:16.664 --rc genhtml_function_coverage=1 00:06:16.664 --rc genhtml_legend=1 00:06:16.664 --rc geninfo_all_blocks=1 00:06:16.664 --rc geninfo_unexecuted_blocks=1 00:06:16.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.664 ' 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:06:16.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.664 --rc genhtml_branch_coverage=1 00:06:16.664 --rc genhtml_function_coverage=1 00:06:16.664 --rc genhtml_legend=1 00:06:16.664 --rc geninfo_all_blocks=1 00:06:16.664 --rc geninfo_unexecuted_blocks=1 00:06:16.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.664 ' 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.664 --rc genhtml_branch_coverage=1 00:06:16.664 --rc genhtml_function_coverage=1 00:06:16.664 --rc genhtml_legend=1 00:06:16.664 --rc geninfo_all_blocks=1 00:06:16.664 --rc geninfo_unexecuted_blocks=1 00:06:16.664 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:16.664 ' 00:06:16.664 09:25:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=474930 00:06:16.664 09:25:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.664 09:25:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 474930 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 474930 ']' 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.664 09:25:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.664 09:25:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.664 [2024-10-07 09:25:12.038214] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:16.664 [2024-10-07 09:25:12.038299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474930 ] 00:06:16.664 [2024-10-07 09:25:12.110340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.664 [2024-10-07 09:25:12.198848] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:16.664 [2024-10-07 09:25:12.198892] app.c: 614:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 474930' to capture a snapshot of events at runtime. 00:06:16.664 [2024-10-07 09:25:12.198902] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.664 [2024-10-07 09:25:12.198920] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.664 [2024-10-07 09:25:12.198929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid474930 for offline analysis/debug. 
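The target above was started with -e bdev (rpc.sh@64), which is why it prints the spdk_trace hint and the /dev/shm trace file path. Everything the rpc_integrity test does next goes over JSON-RPC on /var/tmp/spdk.sock; the same sequence can be reproduced by hand with rpc.py, mirroring the rpc_cmd calls traced below:

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512 B blocks; prints Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru bdev on the malloc bdev
    $rpc bdev_get_bdevs | jq length                    # 2, matching the test's jq length check
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0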
00:06:16.664 [2024-10-07 09:25:12.199428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.600 09:25:12 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.600 09:25:12 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.600 09:25:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:17.600 09:25:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:17.600 09:25:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:17.600 09:25:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:17.600 09:25:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.600 09:25:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.600 09:25:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 ************************************ 00:06:17.600 START TEST rpc_integrity 00:06:17.600 ************************************ 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 09:25:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.600 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:17.600 { 00:06:17.600 "name": "Malloc0", 00:06:17.600 "aliases": [ 00:06:17.600 "532c8bed-73ae-4d00-b268-cf46b0715b5f" 00:06:17.600 ], 00:06:17.600 "product_name": "Malloc disk", 00:06:17.600 "block_size": 512, 00:06:17.600 "num_blocks": 16384, 00:06:17.600 "uuid": "532c8bed-73ae-4d00-b268-cf46b0715b5f", 00:06:17.600 "assigned_rate_limits": { 00:06:17.600 "rw_ios_per_sec": 0, 00:06:17.600 "rw_mbytes_per_sec": 0, 00:06:17.600 "r_mbytes_per_sec": 0, 00:06:17.600 "w_mbytes_per_sec": 
0 00:06:17.600 }, 00:06:17.600 "claimed": false, 00:06:17.600 "zoned": false, 00:06:17.600 "supported_io_types": { 00:06:17.600 "read": true, 00:06:17.600 "write": true, 00:06:17.600 "unmap": true, 00:06:17.600 "flush": true, 00:06:17.600 "reset": true, 00:06:17.600 "nvme_admin": false, 00:06:17.600 "nvme_io": false, 00:06:17.600 "nvme_io_md": false, 00:06:17.600 "write_zeroes": true, 00:06:17.600 "zcopy": true, 00:06:17.600 "get_zone_info": false, 00:06:17.600 "zone_management": false, 00:06:17.600 "zone_append": false, 00:06:17.600 "compare": false, 00:06:17.600 "compare_and_write": false, 00:06:17.600 "abort": true, 00:06:17.600 "seek_hole": false, 00:06:17.601 "seek_data": false, 00:06:17.601 "copy": true, 00:06:17.601 "nvme_iov_md": false 00:06:17.601 }, 00:06:17.601 "memory_domains": [ 00:06:17.601 { 00:06:17.601 "dma_device_id": "system", 00:06:17.601 "dma_device_type": 1 00:06:17.601 }, 00:06:17.601 { 00:06:17.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.601 "dma_device_type": 2 00:06:17.601 } 00:06:17.601 ], 00:06:17.601 "driver_specific": {} 00:06:17.601 } 00:06:17.601 ]' 00:06:17.601 09:25:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.601 [2024-10-07 09:25:13.019587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:17.601 [2024-10-07 09:25:13.019620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.601 [2024-10-07 09:25:13.019637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4a32f70 00:06:17.601 [2024-10-07 09:25:13.019647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.601 [2024-10-07 09:25:13.020573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.601 [2024-10-07 09:25:13.020598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:17.601 Passthru0 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:17.601 { 00:06:17.601 "name": "Malloc0", 00:06:17.601 "aliases": [ 00:06:17.601 "532c8bed-73ae-4d00-b268-cf46b0715b5f" 00:06:17.601 ], 00:06:17.601 "product_name": "Malloc disk", 00:06:17.601 "block_size": 512, 00:06:17.601 "num_blocks": 16384, 00:06:17.601 "uuid": "532c8bed-73ae-4d00-b268-cf46b0715b5f", 00:06:17.601 "assigned_rate_limits": { 00:06:17.601 "rw_ios_per_sec": 0, 00:06:17.601 "rw_mbytes_per_sec": 0, 00:06:17.601 "r_mbytes_per_sec": 0, 00:06:17.601 "w_mbytes_per_sec": 0 00:06:17.601 }, 00:06:17.601 "claimed": true, 00:06:17.601 "claim_type": "exclusive_write", 00:06:17.601 "zoned": false, 00:06:17.601 "supported_io_types": { 00:06:17.601 "read": true, 00:06:17.601 "write": true, 00:06:17.601 "unmap": true, 
00:06:17.601 "flush": true, 00:06:17.601 "reset": true, 00:06:17.601 "nvme_admin": false, 00:06:17.601 "nvme_io": false, 00:06:17.601 "nvme_io_md": false, 00:06:17.601 "write_zeroes": true, 00:06:17.601 "zcopy": true, 00:06:17.601 "get_zone_info": false, 00:06:17.601 "zone_management": false, 00:06:17.601 "zone_append": false, 00:06:17.601 "compare": false, 00:06:17.601 "compare_and_write": false, 00:06:17.601 "abort": true, 00:06:17.601 "seek_hole": false, 00:06:17.601 "seek_data": false, 00:06:17.601 "copy": true, 00:06:17.601 "nvme_iov_md": false 00:06:17.601 }, 00:06:17.601 "memory_domains": [ 00:06:17.601 { 00:06:17.601 "dma_device_id": "system", 00:06:17.601 "dma_device_type": 1 00:06:17.601 }, 00:06:17.601 { 00:06:17.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.601 "dma_device_type": 2 00:06:17.601 } 00:06:17.601 ], 00:06:17.601 "driver_specific": {} 00:06:17.601 }, 00:06:17.601 { 00:06:17.601 "name": "Passthru0", 00:06:17.601 "aliases": [ 00:06:17.601 "0633c622-7f9c-5a5d-b36c-cf97d7c85cb7" 00:06:17.601 ], 00:06:17.601 "product_name": "passthru", 00:06:17.601 "block_size": 512, 00:06:17.601 "num_blocks": 16384, 00:06:17.601 "uuid": "0633c622-7f9c-5a5d-b36c-cf97d7c85cb7", 00:06:17.601 "assigned_rate_limits": { 00:06:17.601 "rw_ios_per_sec": 0, 00:06:17.601 "rw_mbytes_per_sec": 0, 00:06:17.601 "r_mbytes_per_sec": 0, 00:06:17.601 "w_mbytes_per_sec": 0 00:06:17.601 }, 00:06:17.601 "claimed": false, 00:06:17.601 "zoned": false, 00:06:17.601 "supported_io_types": { 00:06:17.601 "read": true, 00:06:17.601 "write": true, 00:06:17.601 "unmap": true, 00:06:17.601 "flush": true, 00:06:17.601 "reset": true, 00:06:17.601 "nvme_admin": false, 00:06:17.601 "nvme_io": false, 00:06:17.601 "nvme_io_md": false, 00:06:17.601 "write_zeroes": true, 00:06:17.601 "zcopy": true, 00:06:17.601 "get_zone_info": false, 00:06:17.601 "zone_management": false, 00:06:17.601 "zone_append": false, 00:06:17.601 "compare": false, 00:06:17.601 "compare_and_write": false, 00:06:17.601 "abort": true, 00:06:17.601 "seek_hole": false, 00:06:17.601 "seek_data": false, 00:06:17.601 "copy": true, 00:06:17.601 "nvme_iov_md": false 00:06:17.601 }, 00:06:17.601 "memory_domains": [ 00:06:17.601 { 00:06:17.601 "dma_device_id": "system", 00:06:17.601 "dma_device_type": 1 00:06:17.601 }, 00:06:17.601 { 00:06:17.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.601 "dma_device_type": 2 00:06:17.601 } 00:06:17.601 ], 00:06:17.601 "driver_specific": { 00:06:17.601 "passthru": { 00:06:17.601 "name": "Passthru0", 00:06:17.601 "base_bdev_name": "Malloc0" 00:06:17.601 } 00:06:17.601 } 00:06:17.601 } 00:06:17.601 ]' 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.601 09:25:13 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:17.601 09:25:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:17.601 00:06:17.601 real 0m0.248s 00:06:17.601 user 0m0.151s 00:06:17.601 sys 0m0.041s 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.601 09:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.601 ************************************ 00:06:17.601 END TEST rpc_integrity 00:06:17.601 ************************************ 00:06:17.860 09:25:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:17.860 09:25:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.860 09:25:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.860 09:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.860 ************************************ 00:06:17.860 START TEST rpc_plugins 00:06:17.860 ************************************ 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:17.860 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.860 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:17.860 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.860 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.860 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:17.860 { 00:06:17.860 "name": "Malloc1", 00:06:17.860 "aliases": [ 00:06:17.860 "13f354bb-fbf6-432b-8741-4bcf4b049adb" 00:06:17.860 ], 00:06:17.860 "product_name": "Malloc disk", 00:06:17.860 "block_size": 4096, 00:06:17.860 "num_blocks": 256, 00:06:17.860 "uuid": "13f354bb-fbf6-432b-8741-4bcf4b049adb", 00:06:17.860 "assigned_rate_limits": { 00:06:17.860 "rw_ios_per_sec": 0, 00:06:17.860 "rw_mbytes_per_sec": 0, 00:06:17.860 "r_mbytes_per_sec": 0, 00:06:17.860 "w_mbytes_per_sec": 0 00:06:17.860 }, 00:06:17.860 "claimed": false, 00:06:17.860 "zoned": false, 00:06:17.861 "supported_io_types": { 00:06:17.861 "read": true, 00:06:17.861 "write": true, 00:06:17.861 "unmap": true, 00:06:17.861 "flush": true, 00:06:17.861 "reset": true, 00:06:17.861 "nvme_admin": false, 00:06:17.861 "nvme_io": false, 00:06:17.861 "nvme_io_md": false, 00:06:17.861 "write_zeroes": true, 00:06:17.861 "zcopy": true, 00:06:17.861 "get_zone_info": false, 00:06:17.861 "zone_management": false, 00:06:17.861 "zone_append": false, 00:06:17.861 "compare": false, 00:06:17.861 "compare_and_write": false, 00:06:17.861 "abort": true, 00:06:17.861 "seek_hole": false, 00:06:17.861 "seek_data": false, 00:06:17.861 "copy": true, 00:06:17.861 
"nvme_iov_md": false 00:06:17.861 }, 00:06:17.861 "memory_domains": [ 00:06:17.861 { 00:06:17.861 "dma_device_id": "system", 00:06:17.861 "dma_device_type": 1 00:06:17.861 }, 00:06:17.861 { 00:06:17.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.861 "dma_device_type": 2 00:06:17.861 } 00:06:17.861 ], 00:06:17.861 "driver_specific": {} 00:06:17.861 } 00:06:17.861 ]' 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:17.861 09:25:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:17.861 00:06:17.861 real 0m0.129s 00:06:17.861 user 0m0.088s 00:06:17.861 sys 0m0.015s 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.861 09:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.861 ************************************ 00:06:17.861 END TEST rpc_plugins 00:06:17.861 ************************************ 00:06:17.861 09:25:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:17.861 09:25:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.861 09:25:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.861 09:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.120 ************************************ 00:06:18.120 START TEST rpc_trace_cmd_test 00:06:18.120 ************************************ 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.120 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:18.120 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid474930", 00:06:18.120 "tpoint_group_mask": "0x8", 00:06:18.120 "iscsi_conn": { 00:06:18.120 "mask": "0x2", 00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "scsi": { 00:06:18.120 "mask": "0x4", 00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "bdev": { 00:06:18.120 "mask": "0x8", 00:06:18.120 "tpoint_mask": "0xffffffffffffffff" 00:06:18.120 }, 00:06:18.120 "nvmf_rdma": { 00:06:18.120 "mask": "0x10", 00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "nvmf_tcp": { 00:06:18.120 "mask": "0x20", 
00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "ftl": { 00:06:18.120 "mask": "0x40", 00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "blobfs": { 00:06:18.120 "mask": "0x80", 00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "dsa": { 00:06:18.120 "mask": "0x200", 00:06:18.120 "tpoint_mask": "0x0" 00:06:18.120 }, 00:06:18.120 "thread": { 00:06:18.120 "mask": "0x400", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "nvme_pcie": { 00:06:18.121 "mask": "0x800", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "iaa": { 00:06:18.121 "mask": "0x1000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "nvme_tcp": { 00:06:18.121 "mask": "0x2000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "bdev_nvme": { 00:06:18.121 "mask": "0x4000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "sock": { 00:06:18.121 "mask": "0x8000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "blob": { 00:06:18.121 "mask": "0x10000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "bdev_raid": { 00:06:18.121 "mask": "0x20000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 }, 00:06:18.121 "scheduler": { 00:06:18.121 "mask": "0x40000", 00:06:18.121 "tpoint_mask": "0x0" 00:06:18.121 } 00:06:18.121 }' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:18.121 00:06:18.121 real 0m0.197s 00:06:18.121 user 0m0.163s 00:06:18.121 sys 0m0.025s 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.121 09:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.121 ************************************ 00:06:18.121 END TEST rpc_trace_cmd_test 00:06:18.121 ************************************ 00:06:18.121 09:25:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:18.121 09:25:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:18.121 09:25:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:18.121 09:25:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.121 09:25:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.380 09:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.380 ************************************ 00:06:18.380 START TEST rpc_daemon_integrity 00:06:18.380 ************************************ 00:06:18.380 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:18.380 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:18.381 { 00:06:18.381 "name": "Malloc2", 00:06:18.381 "aliases": [ 00:06:18.381 "51503fa5-58c3-4398-94e9-d9ae9e9f922e" 00:06:18.381 ], 00:06:18.381 "product_name": "Malloc disk", 00:06:18.381 "block_size": 512, 00:06:18.381 "num_blocks": 16384, 00:06:18.381 "uuid": "51503fa5-58c3-4398-94e9-d9ae9e9f922e", 00:06:18.381 "assigned_rate_limits": { 00:06:18.381 "rw_ios_per_sec": 0, 00:06:18.381 "rw_mbytes_per_sec": 0, 00:06:18.381 "r_mbytes_per_sec": 0, 00:06:18.381 "w_mbytes_per_sec": 0 00:06:18.381 }, 00:06:18.381 "claimed": false, 00:06:18.381 "zoned": false, 00:06:18.381 "supported_io_types": { 00:06:18.381 "read": true, 00:06:18.381 "write": true, 00:06:18.381 "unmap": true, 00:06:18.381 "flush": true, 00:06:18.381 "reset": true, 00:06:18.381 "nvme_admin": false, 00:06:18.381 "nvme_io": false, 00:06:18.381 "nvme_io_md": false, 00:06:18.381 "write_zeroes": true, 00:06:18.381 "zcopy": true, 00:06:18.381 "get_zone_info": false, 00:06:18.381 "zone_management": false, 00:06:18.381 "zone_append": false, 00:06:18.381 "compare": false, 00:06:18.381 "compare_and_write": false, 00:06:18.381 "abort": true, 00:06:18.381 "seek_hole": false, 00:06:18.381 "seek_data": false, 00:06:18.381 "copy": true, 00:06:18.381 "nvme_iov_md": false 00:06:18.381 }, 00:06:18.381 "memory_domains": [ 00:06:18.381 { 00:06:18.381 "dma_device_id": "system", 00:06:18.381 "dma_device_type": 1 00:06:18.381 }, 00:06:18.381 { 00:06:18.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.381 "dma_device_type": 2 00:06:18.381 } 00:06:18.381 ], 00:06:18.381 "driver_specific": {} 00:06:18.381 } 00:06:18.381 ]' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 [2024-10-07 09:25:13.845731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:18.381 
[2024-10-07 09:25:13.845764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.381 [2024-10-07 09:25:13.845781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4b55330 00:06:18.381 [2024-10-07 09:25:13.845790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.381 [2024-10-07 09:25:13.846694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.381 [2024-10-07 09:25:13.846719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:18.381 Passthru0 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:18.381 { 00:06:18.381 "name": "Malloc2", 00:06:18.381 "aliases": [ 00:06:18.381 "51503fa5-58c3-4398-94e9-d9ae9e9f922e" 00:06:18.381 ], 00:06:18.381 "product_name": "Malloc disk", 00:06:18.381 "block_size": 512, 00:06:18.381 "num_blocks": 16384, 00:06:18.381 "uuid": "51503fa5-58c3-4398-94e9-d9ae9e9f922e", 00:06:18.381 "assigned_rate_limits": { 00:06:18.381 "rw_ios_per_sec": 0, 00:06:18.381 "rw_mbytes_per_sec": 0, 00:06:18.381 "r_mbytes_per_sec": 0, 00:06:18.381 "w_mbytes_per_sec": 0 00:06:18.381 }, 00:06:18.381 "claimed": true, 00:06:18.381 "claim_type": "exclusive_write", 00:06:18.381 "zoned": false, 00:06:18.381 "supported_io_types": { 00:06:18.381 "read": true, 00:06:18.381 "write": true, 00:06:18.381 "unmap": true, 00:06:18.381 "flush": true, 00:06:18.381 "reset": true, 00:06:18.381 "nvme_admin": false, 00:06:18.381 "nvme_io": false, 00:06:18.381 "nvme_io_md": false, 00:06:18.381 "write_zeroes": true, 00:06:18.381 "zcopy": true, 00:06:18.381 "get_zone_info": false, 00:06:18.381 "zone_management": false, 00:06:18.381 "zone_append": false, 00:06:18.381 "compare": false, 00:06:18.381 "compare_and_write": false, 00:06:18.381 "abort": true, 00:06:18.381 "seek_hole": false, 00:06:18.381 "seek_data": false, 00:06:18.381 "copy": true, 00:06:18.381 "nvme_iov_md": false 00:06:18.381 }, 00:06:18.381 "memory_domains": [ 00:06:18.381 { 00:06:18.381 "dma_device_id": "system", 00:06:18.381 "dma_device_type": 1 00:06:18.381 }, 00:06:18.381 { 00:06:18.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.381 "dma_device_type": 2 00:06:18.381 } 00:06:18.381 ], 00:06:18.381 "driver_specific": {} 00:06:18.381 }, 00:06:18.381 { 00:06:18.381 "name": "Passthru0", 00:06:18.381 "aliases": [ 00:06:18.381 "d720d776-2778-507e-89af-3816ddb0fadc" 00:06:18.381 ], 00:06:18.381 "product_name": "passthru", 00:06:18.381 "block_size": 512, 00:06:18.381 "num_blocks": 16384, 00:06:18.381 "uuid": "d720d776-2778-507e-89af-3816ddb0fadc", 00:06:18.381 "assigned_rate_limits": { 00:06:18.381 "rw_ios_per_sec": 0, 00:06:18.381 "rw_mbytes_per_sec": 0, 00:06:18.381 "r_mbytes_per_sec": 0, 00:06:18.381 "w_mbytes_per_sec": 0 00:06:18.381 }, 00:06:18.381 "claimed": false, 00:06:18.381 "zoned": false, 00:06:18.381 "supported_io_types": { 00:06:18.381 "read": true, 00:06:18.381 "write": true, 00:06:18.381 "unmap": true, 00:06:18.381 "flush": true, 00:06:18.381 "reset": true, 
00:06:18.381 "nvme_admin": false, 00:06:18.381 "nvme_io": false, 00:06:18.381 "nvme_io_md": false, 00:06:18.381 "write_zeroes": true, 00:06:18.381 "zcopy": true, 00:06:18.381 "get_zone_info": false, 00:06:18.381 "zone_management": false, 00:06:18.381 "zone_append": false, 00:06:18.381 "compare": false, 00:06:18.381 "compare_and_write": false, 00:06:18.381 "abort": true, 00:06:18.381 "seek_hole": false, 00:06:18.381 "seek_data": false, 00:06:18.381 "copy": true, 00:06:18.381 "nvme_iov_md": false 00:06:18.381 }, 00:06:18.381 "memory_domains": [ 00:06:18.381 { 00:06:18.381 "dma_device_id": "system", 00:06:18.381 "dma_device_type": 1 00:06:18.381 }, 00:06:18.381 { 00:06:18.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.381 "dma_device_type": 2 00:06:18.381 } 00:06:18.381 ], 00:06:18.381 "driver_specific": { 00:06:18.381 "passthru": { 00:06:18.381 "name": "Passthru0", 00:06:18.381 "base_bdev_name": "Malloc2" 00:06:18.381 } 00:06:18.381 } 00:06:18.381 } 00:06:18.381 ]' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:18.381 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:18.641 09:25:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:18.641 00:06:18.641 real 0m0.252s 00:06:18.641 user 0m0.155s 00:06:18.641 sys 0m0.044s 00:06:18.641 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.641 09:25:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.641 ************************************ 00:06:18.641 END TEST rpc_daemon_integrity 00:06:18.641 ************************************ 00:06:18.641 09:25:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:18.641 09:25:14 rpc -- rpc/rpc.sh@84 -- # killprocess 474930 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 474930 ']' 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@954 -- # kill -0 474930 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@955 -- # uname 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 474930 
00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 474930' 00:06:18.641 killing process with pid 474930 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@969 -- # kill 474930 00:06:18.641 09:25:14 rpc -- common/autotest_common.sh@974 -- # wait 474930 00:06:18.900 00:06:18.900 real 0m2.620s 00:06:18.900 user 0m3.216s 00:06:18.900 sys 0m0.840s 00:06:18.900 09:25:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.900 09:25:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.900 ************************************ 00:06:18.900 END TEST rpc 00:06:18.900 ************************************ 00:06:19.159 09:25:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:19.159 09:25:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.159 09:25:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.159 09:25:14 -- common/autotest_common.sh@10 -- # set +x 00:06:19.159 ************************************ 00:06:19.159 START TEST skip_rpc 00:06:19.159 ************************************ 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:19.159 * Looking for test storage... 00:06:19.159 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.159 09:25:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.159 09:25:14 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.159 --rc genhtml_branch_coverage=1 00:06:19.159 --rc genhtml_function_coverage=1 00:06:19.159 --rc genhtml_legend=1 00:06:19.159 --rc geninfo_all_blocks=1 00:06:19.160 --rc geninfo_unexecuted_blocks=1 00:06:19.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.160 ' 00:06:19.160 09:25:14 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.160 --rc genhtml_branch_coverage=1 00:06:19.160 --rc genhtml_function_coverage=1 00:06:19.160 --rc genhtml_legend=1 00:06:19.160 --rc geninfo_all_blocks=1 00:06:19.160 --rc geninfo_unexecuted_blocks=1 00:06:19.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.160 ' 00:06:19.160 09:25:14 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.160 --rc genhtml_branch_coverage=1 00:06:19.160 --rc genhtml_function_coverage=1 00:06:19.160 --rc genhtml_legend=1 00:06:19.160 --rc geninfo_all_blocks=1 00:06:19.160 --rc geninfo_unexecuted_blocks=1 00:06:19.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.160 ' 00:06:19.160 09:25:14 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.160 --rc genhtml_branch_coverage=1 00:06:19.160 --rc genhtml_function_coverage=1 00:06:19.160 --rc genhtml_legend=1 00:06:19.160 --rc geninfo_all_blocks=1 00:06:19.160 --rc geninfo_unexecuted_blocks=1 00:06:19.160 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:19.160 ' 00:06:19.160 09:25:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:19.160 09:25:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:19.160 09:25:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:19.160 09:25:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.160 09:25:14 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.160 09:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.419 ************************************ 00:06:19.419 START TEST skip_rpc 00:06:19.419 ************************************ 00:06:19.419 09:25:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:19.419 09:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=475469 00:06:19.419 09:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:19.419 09:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.419 09:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:19.419 [2024-10-07 09:25:14.741294] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:19.419 [2024-10-07 09:25:14.741336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475469 ] 00:06:19.419 [2024-10-07 09:25:14.812585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.419 [2024-10-07 09:25:14.893643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 475469 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 475469 ']' 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 475469 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475469 00:06:24.696 
09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475469' 00:06:24.696 killing process with pid 475469 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 475469 00:06:24.696 09:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 475469 00:06:24.696 00:06:24.696 real 0m5.436s 00:06:24.696 user 0m5.178s 00:06:24.696 sys 0m0.296s 00:06:24.696 09:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.696 09:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.696 ************************************ 00:06:24.696 END TEST skip_rpc 00:06:24.696 ************************************ 00:06:24.696 09:25:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:24.696 09:25:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.696 09:25:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.696 09:25:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.696 ************************************ 00:06:24.696 START TEST skip_rpc_with_json 00:06:24.696 ************************************ 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=476196 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 476196 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 476196 ']' 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.696 09:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.957 [2024-10-07 09:25:20.265633] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:24.957 [2024-10-07 09:25:20.265706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476196 ] 00:06:24.957 [2024-10-07 09:25:20.336912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.957 [2024-10-07 09:25:20.428327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.894 [2024-10-07 09:25:21.124049] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:25.894 request: 00:06:25.894 { 00:06:25.894 "trtype": "tcp", 00:06:25.894 "method": "nvmf_get_transports", 00:06:25.894 "req_id": 1 00:06:25.894 } 00:06:25.894 Got JSON-RPC error response 00:06:25.894 response: 00:06:25.894 { 00:06:25.894 "code": -19, 00:06:25.894 "message": "No such device" 00:06:25.894 } 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.894 [2024-10-07 09:25:21.132130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.894 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:25.894 { 00:06:25.894 "subsystems": [ 00:06:25.894 { 00:06:25.894 "subsystem": "scheduler", 00:06:25.894 "config": [ 00:06:25.894 { 00:06:25.894 "method": "framework_set_scheduler", 00:06:25.894 "params": { 00:06:25.894 "name": "static" 00:06:25.894 } 00:06:25.894 } 00:06:25.894 ] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "vmd", 00:06:25.894 "config": [] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "sock", 00:06:25.894 "config": [ 00:06:25.894 { 00:06:25.894 "method": "sock_set_default_impl", 00:06:25.894 "params": { 00:06:25.894 "impl_name": "posix" 00:06:25.894 } 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "method": "sock_impl_set_options", 00:06:25.894 "params": { 00:06:25.894 "impl_name": "ssl", 00:06:25.894 "recv_buf_size": 4096, 00:06:25.894 "send_buf_size": 4096, 00:06:25.894 "enable_recv_pipe": true, 00:06:25.894 "enable_quickack": false, 00:06:25.894 "enable_placement_id": 
0, 00:06:25.894 "enable_zerocopy_send_server": true, 00:06:25.894 "enable_zerocopy_send_client": false, 00:06:25.894 "zerocopy_threshold": 0, 00:06:25.894 "tls_version": 0, 00:06:25.894 "enable_ktls": false 00:06:25.894 } 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "method": "sock_impl_set_options", 00:06:25.894 "params": { 00:06:25.894 "impl_name": "posix", 00:06:25.894 "recv_buf_size": 2097152, 00:06:25.894 "send_buf_size": 2097152, 00:06:25.894 "enable_recv_pipe": true, 00:06:25.894 "enable_quickack": false, 00:06:25.894 "enable_placement_id": 0, 00:06:25.894 "enable_zerocopy_send_server": true, 00:06:25.894 "enable_zerocopy_send_client": false, 00:06:25.894 "zerocopy_threshold": 0, 00:06:25.894 "tls_version": 0, 00:06:25.894 "enable_ktls": false 00:06:25.894 } 00:06:25.894 } 00:06:25.894 ] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "iobuf", 00:06:25.894 "config": [ 00:06:25.894 { 00:06:25.894 "method": "iobuf_set_options", 00:06:25.894 "params": { 00:06:25.894 "small_pool_count": 8192, 00:06:25.894 "large_pool_count": 1024, 00:06:25.894 "small_bufsize": 8192, 00:06:25.894 "large_bufsize": 135168 00:06:25.894 } 00:06:25.894 } 00:06:25.894 ] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "keyring", 00:06:25.894 "config": [] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "vfio_user_target", 00:06:25.894 "config": null 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "fsdev", 00:06:25.894 "config": [ 00:06:25.894 { 00:06:25.894 "method": "fsdev_set_opts", 00:06:25.894 "params": { 00:06:25.894 "fsdev_io_pool_size": 65535, 00:06:25.894 "fsdev_io_cache_size": 256 00:06:25.894 } 00:06:25.894 } 00:06:25.894 ] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "accel", 00:06:25.894 "config": [ 00:06:25.894 { 00:06:25.894 "method": "accel_set_options", 00:06:25.894 "params": { 00:06:25.894 "small_cache_size": 128, 00:06:25.894 "large_cache_size": 16, 00:06:25.894 "task_count": 2048, 00:06:25.894 "sequence_count": 2048, 00:06:25.894 "buf_count": 2048 00:06:25.894 } 00:06:25.894 } 00:06:25.894 ] 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "subsystem": "bdev", 00:06:25.894 "config": [ 00:06:25.894 { 00:06:25.894 "method": "bdev_set_options", 00:06:25.894 "params": { 00:06:25.894 "bdev_io_pool_size": 65535, 00:06:25.894 "bdev_io_cache_size": 256, 00:06:25.894 "bdev_auto_examine": true, 00:06:25.894 "iobuf_small_cache_size": 128, 00:06:25.894 "iobuf_large_cache_size": 16 00:06:25.894 } 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "method": "bdev_raid_set_options", 00:06:25.894 "params": { 00:06:25.894 "process_window_size_kb": 1024, 00:06:25.894 "process_max_bandwidth_mb_sec": 0 00:06:25.894 } 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "method": "bdev_nvme_set_options", 00:06:25.894 "params": { 00:06:25.894 "action_on_timeout": "none", 00:06:25.894 "timeout_us": 0, 00:06:25.894 "timeout_admin_us": 0, 00:06:25.894 "keep_alive_timeout_ms": 10000, 00:06:25.894 "arbitration_burst": 0, 00:06:25.894 "low_priority_weight": 0, 00:06:25.894 "medium_priority_weight": 0, 00:06:25.894 "high_priority_weight": 0, 00:06:25.894 "nvme_adminq_poll_period_us": 10000, 00:06:25.894 "nvme_ioq_poll_period_us": 0, 00:06:25.894 "io_queue_requests": 0, 00:06:25.894 "delay_cmd_submit": true, 00:06:25.894 "transport_retry_count": 4, 00:06:25.894 "bdev_retry_count": 3, 00:06:25.894 "transport_ack_timeout": 0, 00:06:25.894 "ctrlr_loss_timeout_sec": 0, 00:06:25.894 "reconnect_delay_sec": 0, 00:06:25.894 "fast_io_fail_timeout_sec": 0, 00:06:25.894 "disable_auto_failback": false, 
00:06:25.894 "generate_uuids": false, 00:06:25.894 "transport_tos": 0, 00:06:25.894 "nvme_error_stat": false, 00:06:25.894 "rdma_srq_size": 0, 00:06:25.894 "io_path_stat": false, 00:06:25.894 "allow_accel_sequence": false, 00:06:25.894 "rdma_max_cq_size": 0, 00:06:25.894 "rdma_cm_event_timeout_ms": 0, 00:06:25.894 "dhchap_digests": [ 00:06:25.894 "sha256", 00:06:25.894 "sha384", 00:06:25.894 "sha512" 00:06:25.894 ], 00:06:25.894 "dhchap_dhgroups": [ 00:06:25.894 "null", 00:06:25.894 "ffdhe2048", 00:06:25.894 "ffdhe3072", 00:06:25.894 "ffdhe4096", 00:06:25.894 "ffdhe6144", 00:06:25.894 "ffdhe8192" 00:06:25.894 ] 00:06:25.894 } 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "method": "bdev_nvme_set_hotplug", 00:06:25.894 "params": { 00:06:25.894 "period_us": 100000, 00:06:25.894 "enable": false 00:06:25.894 } 00:06:25.894 }, 00:06:25.894 { 00:06:25.894 "method": "bdev_iscsi_set_options", 00:06:25.894 "params": { 00:06:25.894 "timeout_sec": 30 00:06:25.894 } 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "method": "bdev_wait_for_examine" 00:06:25.895 } 00:06:25.895 ] 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "nvmf", 00:06:25.895 "config": [ 00:06:25.895 { 00:06:25.895 "method": "nvmf_set_config", 00:06:25.895 "params": { 00:06:25.895 "discovery_filter": "match_any", 00:06:25.895 "admin_cmd_passthru": { 00:06:25.895 "identify_ctrlr": false 00:06:25.895 }, 00:06:25.895 "dhchap_digests": [ 00:06:25.895 "sha256", 00:06:25.895 "sha384", 00:06:25.895 "sha512" 00:06:25.895 ], 00:06:25.895 "dhchap_dhgroups": [ 00:06:25.895 "null", 00:06:25.895 "ffdhe2048", 00:06:25.895 "ffdhe3072", 00:06:25.895 "ffdhe4096", 00:06:25.895 "ffdhe6144", 00:06:25.895 "ffdhe8192" 00:06:25.895 ] 00:06:25.895 } 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "method": "nvmf_set_max_subsystems", 00:06:25.895 "params": { 00:06:25.895 "max_subsystems": 1024 00:06:25.895 } 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "method": "nvmf_set_crdt", 00:06:25.895 "params": { 00:06:25.895 "crdt1": 0, 00:06:25.895 "crdt2": 0, 00:06:25.895 "crdt3": 0 00:06:25.895 } 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "method": "nvmf_create_transport", 00:06:25.895 "params": { 00:06:25.895 "trtype": "TCP", 00:06:25.895 "max_queue_depth": 128, 00:06:25.895 "max_io_qpairs_per_ctrlr": 127, 00:06:25.895 "in_capsule_data_size": 4096, 00:06:25.895 "max_io_size": 131072, 00:06:25.895 "io_unit_size": 131072, 00:06:25.895 "max_aq_depth": 128, 00:06:25.895 "num_shared_buffers": 511, 00:06:25.895 "buf_cache_size": 4294967295, 00:06:25.895 "dif_insert_or_strip": false, 00:06:25.895 "zcopy": false, 00:06:25.895 "c2h_success": true, 00:06:25.895 "sock_priority": 0, 00:06:25.895 "abort_timeout_sec": 1, 00:06:25.895 "ack_timeout": 0, 00:06:25.895 "data_wr_pool_size": 0 00:06:25.895 } 00:06:25.895 } 00:06:25.895 ] 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "nbd", 00:06:25.895 "config": [] 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "ublk", 00:06:25.895 "config": [] 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "vhost_blk", 00:06:25.895 "config": [] 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "scsi", 00:06:25.895 "config": null 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "iscsi", 00:06:25.895 "config": [ 00:06:25.895 { 00:06:25.895 "method": "iscsi_set_options", 00:06:25.895 "params": { 00:06:25.895 "node_base": "iqn.2016-06.io.spdk", 00:06:25.895 "max_sessions": 128, 00:06:25.895 "max_connections_per_session": 2, 00:06:25.895 "max_queue_depth": 64, 00:06:25.895 "default_time2wait": 2, 00:06:25.895 
"default_time2retain": 20, 00:06:25.895 "first_burst_length": 8192, 00:06:25.895 "immediate_data": true, 00:06:25.895 "allow_duplicated_isid": false, 00:06:25.895 "error_recovery_level": 0, 00:06:25.895 "nop_timeout": 60, 00:06:25.895 "nop_in_interval": 30, 00:06:25.895 "disable_chap": false, 00:06:25.895 "require_chap": false, 00:06:25.895 "mutual_chap": false, 00:06:25.895 "chap_group": 0, 00:06:25.895 "max_large_datain_per_connection": 64, 00:06:25.895 "max_r2t_per_connection": 4, 00:06:25.895 "pdu_pool_size": 36864, 00:06:25.895 "immediate_data_pool_size": 16384, 00:06:25.895 "data_out_pool_size": 2048 00:06:25.895 } 00:06:25.895 } 00:06:25.895 ] 00:06:25.895 }, 00:06:25.895 { 00:06:25.895 "subsystem": "vhost_scsi", 00:06:25.895 "config": [] 00:06:25.895 } 00:06:25.895 ] 00:06:25.895 } 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 476196 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 476196 ']' 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 476196 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476196 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476196' 00:06:25.895 killing process with pid 476196 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 476196 00:06:25.895 09:25:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 476196 00:06:26.463 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=476386 00:06:26.463 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:26.463 09:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 476386 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 476386 ']' 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 476386 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476386 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476386' 
00:06:31.739 killing process with pid 476386 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 476386 00:06:31.739 09:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 476386 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:31.739 00:06:31.739 real 0m6.905s 00:06:31.739 user 0m6.679s 00:06:31.739 sys 0m0.692s 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.739 ************************************ 00:06:31.739 END TEST skip_rpc_with_json 00:06:31.739 ************************************ 00:06:31.739 09:25:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:31.739 09:25:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.739 09:25:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.739 09:25:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.739 ************************************ 00:06:31.739 START TEST skip_rpc_with_delay 00:06:31.739 ************************************ 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:31.739 [2024-10-07 09:25:27.241140] 
app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:31.739 [2024-10-07 09:25:27.241254] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.739 00:06:31.739 real 0m0.043s 00:06:31.739 user 0m0.019s 00:06:31.739 sys 0m0.024s 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.739 09:25:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:31.739 ************************************ 00:06:31.739 END TEST skip_rpc_with_delay 00:06:31.739 ************************************ 00:06:31.739 09:25:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:31.740 09:25:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:31.998 09:25:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:31.998 09:25:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.998 09:25:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.998 09:25:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.998 ************************************ 00:06:31.998 START TEST exit_on_failed_rpc_init 00:06:31.998 ************************************ 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=477141 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 477141 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 477141 ']' 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.998 09:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.998 [2024-10-07 09:25:27.373809] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
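The NOT wrapper threaded through skip_rpc_with_delay above, and reused by exit_on_failed_rpc_init below, is what turns an expected spdk_tgt abort into a passing assertion: the target must refuse '--wait-for-rpc' when no RPC server will start, and the harness inverts that non-zero exit. A minimal sketch in the same shell, assuming the real helper in test/common/autotest_common.sh carries extra signal/exit-code case analysis beyond this:

NOT() {
    local es=0
    "$@" || es=$?              # run the wrapped command, capture its status
    # Deaths by signal surface as 128+N; fold them down (the exit_on_failed_rpc_init
    # trace below shows es=234 reducing to 106 on this path).
    (( es > 128 )) && (( es -= 128 ))
    # Succeed only when the command failed: !es == 0 holds exactly when es != 0.
    (( !es == 0 ))
}

spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
NOT "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc   # passes: target exits non-zero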
00:06:31.998 [2024-10-07 09:25:27.373891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477141 ] 00:06:31.998 [2024-10-07 09:25:27.448713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.998 [2024-10-07 09:25:27.534752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.935 [2024-10-07 09:25:28.260317] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:32.935 [2024-10-07 09:25:28.260385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477319 ] 00:06:32.935 [2024-10-07 09:25:28.333764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.935 [2024-10-07 09:25:28.415673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.935 [2024-10-07 09:25:28.415760] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
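The collision exit_on_failed_rpc_init is provoking here: the first spdk_tgt owns the default /var/tmp/spdk.sock, so a second instance aimed at the same path has to fail rpc_listen, and the error cascade continuing below ('Unable to start RPC service', then the non-zero spdk_app_stop) is precisely the failure being asserted. Stripped of its assertion scaffolding, the test reduces to roughly this sketch (waitforlisten, NOT, and killprocess are the harness helpers named throughout this log):

spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &            # first instance binds /var/tmp/spdk.sock
spdk_pid=$!
waitforlisten "$spdk_pid"       # block until its RPC socket answers

NOT "$spdk_tgt" -m 0x2          # same default socket: must exit non-zero

killprocess "$spdk_pid"         # tear down the surviving first instance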
00:06:32.935 [2024-10-07 09:25:28.415773] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:32.935 [2024-10-07 09:25:28.415781] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 477141 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 477141 ']' 00:06:32.935 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 477141 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477141 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477141' 00:06:33.211 killing process with pid 477141 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 477141 00:06:33.211 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 477141 00:06:33.469 00:06:33.469 real 0m1.572s 00:06:33.469 user 0m1.766s 00:06:33.469 sys 0m0.486s 00:06:33.469 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.469 09:25:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.469 ************************************ 00:06:33.469 END TEST exit_on_failed_rpc_init 00:06:33.469 ************************************ 00:06:33.469 09:25:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:33.469 00:06:33.469 real 0m14.435s 00:06:33.469 user 0m13.833s 00:06:33.469 sys 0m1.826s 00:06:33.469 09:25:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.469 09:25:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.469 ************************************ 00:06:33.469 END TEST skip_rpc 00:06:33.469 ************************************ 00:06:33.469 09:25:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:33.469 09:25:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.469 09:25:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.469 09:25:29 -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.728 ************************************ 00:06:33.728 START TEST rpc_client 00:06:33.728 ************************************ 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:33.728 * Looking for test storage... 00:06:33.728 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.728 09:25:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.728 09:25:29 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.728 --rc genhtml_branch_coverage=1 00:06:33.728 --rc genhtml_function_coverage=1 00:06:33.728 --rc genhtml_legend=1 00:06:33.728 --rc geninfo_all_blocks=1 00:06:33.729 --rc geninfo_unexecuted_blocks=1 00:06:33.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.729 ' 00:06:33.729 09:25:29 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.729 --rc genhtml_branch_coverage=1 00:06:33.729 --rc genhtml_function_coverage=1 00:06:33.729 --rc genhtml_legend=1 00:06:33.729 --rc geninfo_all_blocks=1 00:06:33.729 --rc geninfo_unexecuted_blocks=1 00:06:33.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.729 ' 00:06:33.729 09:25:29 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.729 --rc genhtml_branch_coverage=1 00:06:33.729 --rc genhtml_function_coverage=1 00:06:33.729 --rc genhtml_legend=1 00:06:33.729 --rc geninfo_all_blocks=1 00:06:33.729 --rc geninfo_unexecuted_blocks=1 00:06:33.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.729 ' 00:06:33.729 09:25:29 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.729 --rc genhtml_branch_coverage=1 00:06:33.729 --rc genhtml_function_coverage=1 00:06:33.729 --rc genhtml_legend=1 00:06:33.729 --rc geninfo_all_blocks=1 00:06:33.729 --rc geninfo_unexecuted_blocks=1 00:06:33.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.729 ' 00:06:33.729 09:25:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:33.729 OK 00:06:33.729 09:25:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:33.729 00:06:33.729 real 0m0.208s 00:06:33.729 user 0m0.105s 00:06:33.729 sys 0m0.121s 00:06:33.729 09:25:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
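The scripts/common.sh walk replayed at the top of each test above is a dotted-version gate: lt 1.15 2 decides whether the branch-coverage LCOV_OPTS variants get exported for the llvm-gcov tooling. Condensed to its comparison core, and dropping the decimal digit-validation helper visible in the trace (so purely numeric components are assumed):

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15), hence ver1_l=2
    IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2),    hence ver2_l=1
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components compare as 0
        (( d1 > d2 )) && { [[ $op == *'>'* ]]; return; }
        (( d1 < d2 )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *=* ]]                   # versions equal: only <=, >=, == succeed
}

lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> 1 < 2 -> exit 0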
00:06:33.729 09:25:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:33.729 ************************************ 00:06:33.729 END TEST rpc_client 00:06:33.729 ************************************ 00:06:33.988 09:25:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:33.988 09:25:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.988 09:25:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.988 09:25:29 -- common/autotest_common.sh@10 -- # set +x 00:06:33.988 ************************************ 00:06:33.988 START TEST json_config 00:06:33.988 ************************************ 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.988 09:25:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.988 09:25:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.988 09:25:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.988 09:25:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.988 09:25:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.988 09:25:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:33.988 09:25:29 json_config -- scripts/common.sh@345 -- # : 1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.988 09:25:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.988 09:25:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@353 -- # local d=1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.988 09:25:29 json_config -- scripts/common.sh@355 -- # echo 1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.988 09:25:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@353 -- # local d=2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.988 09:25:29 json_config -- scripts/common.sh@355 -- # echo 2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.988 09:25:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.988 09:25:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.988 09:25:29 json_config -- scripts/common.sh@368 -- # return 0 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.988 --rc genhtml_branch_coverage=1 00:06:33.988 --rc genhtml_function_coverage=1 00:06:33.988 --rc genhtml_legend=1 00:06:33.988 --rc geninfo_all_blocks=1 00:06:33.988 --rc geninfo_unexecuted_blocks=1 00:06:33.988 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.988 ' 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.988 --rc genhtml_branch_coverage=1 00:06:33.988 --rc genhtml_function_coverage=1 00:06:33.988 --rc genhtml_legend=1 00:06:33.988 --rc geninfo_all_blocks=1 00:06:33.988 --rc geninfo_unexecuted_blocks=1 00:06:33.988 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.988 ' 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.988 --rc genhtml_branch_coverage=1 00:06:33.988 --rc genhtml_function_coverage=1 00:06:33.988 --rc genhtml_legend=1 00:06:33.988 --rc geninfo_all_blocks=1 00:06:33.988 --rc geninfo_unexecuted_blocks=1 00:06:33.988 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.988 ' 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.988 --rc genhtml_branch_coverage=1 00:06:33.988 --rc genhtml_function_coverage=1 00:06:33.988 --rc genhtml_legend=1 00:06:33.988 --rc geninfo_all_blocks=1 00:06:33.988 --rc geninfo_unexecuted_blocks=1 00:06:33.988 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.988 ' 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:33.988 09:25:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.988 09:25:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.988 09:25:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.988 09:25:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.988 09:25:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.988 09:25:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.988 09:25:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.988 09:25:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:33.988 09:25:29 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@51 -- # : 0 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.988 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.988 09:25:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:33.988 WARNING: No tests are enabled so not running JSON configuration tests 00:06:33.988 09:25:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:33.988 00:06:33.988 real 0m0.175s 00:06:33.988 user 0m0.114s 00:06:33.988 sys 0m0.070s 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.988 09:25:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.988 ************************************ 00:06:33.988 END TEST json_config 00:06:33.988 ************************************ 00:06:33.988 09:25:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.988 09:25:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.988 09:25:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.988 09:25:29 -- common/autotest_common.sh@10 -- # set +x 00:06:34.248 ************************************ 00:06:34.248 START TEST json_config_extra_key 00:06:34.248 ************************************ 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov 
--version 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.248 09:25:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.248 09:25:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.248 --rc genhtml_branch_coverage=1 00:06:34.248 --rc genhtml_function_coverage=1 00:06:34.248 --rc genhtml_legend=1 00:06:34.248 --rc geninfo_all_blocks=1 00:06:34.248 --rc geninfo_unexecuted_blocks=1 00:06:34.248 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.248 ' 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.249 --rc genhtml_branch_coverage=1 
00:06:34.249 --rc genhtml_function_coverage=1 00:06:34.249 --rc genhtml_legend=1 00:06:34.249 --rc geninfo_all_blocks=1 00:06:34.249 --rc geninfo_unexecuted_blocks=1 00:06:34.249 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.249 ' 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.249 --rc genhtml_branch_coverage=1 00:06:34.249 --rc genhtml_function_coverage=1 00:06:34.249 --rc genhtml_legend=1 00:06:34.249 --rc geninfo_all_blocks=1 00:06:34.249 --rc geninfo_unexecuted_blocks=1 00:06:34.249 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.249 ' 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.249 --rc genhtml_branch_coverage=1 00:06:34.249 --rc genhtml_function_coverage=1 00:06:34.249 --rc genhtml_legend=1 00:06:34.249 --rc geninfo_all_blocks=1 00:06:34.249 --rc geninfo_unexecuted_blocks=1 00:06:34.249 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.249 ' 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:34.249 09:25:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.249 09:25:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.249 09:25:29 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.249 09:25:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.249 09:25:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.249 09:25:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.249 09:25:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.249 09:25:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:34.249 09:25:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.249 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.249 09:25:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:34.249 INFO: launching applications... 00:06:34.249 09:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=477664 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:34.249 Waiting for target to run... 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 477664 /var/tmp/spdk_tgt.sock 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 477664 ']' 00:06:34.249 09:25:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:34.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
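Every target launch in this log serializes on waitforlisten before any RPC is issued, json_config_extra_key included. The idiom is a bounded poll of the RPC socket, sketched here under the assumption that the real helper in test/common/autotest_common.sh also handles TCP endpoints and uses the repo-root $rootdir convention for rpc.py:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2> /dev/null || return 1     # target died before listening
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                # socket is answering RPCs
        fi
        sleep 0.5
    done
    return 1                                        # never came up within budget
}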
00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.249 09:25:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.508 [2024-10-07 09:25:29.814660] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:34.508 [2024-10-07 09:25:29.814733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477664 ] 00:06:34.766 [2024-10-07 09:25:30.273581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.025 [2024-10-07 09:25:30.360586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.288 09:25:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.288 09:25:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:35.288 00:06:35.288 09:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:35.288 INFO: shutting down applications... 00:06:35.288 09:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 477664 ]] 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 477664 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 477664 00:06:35.288 09:25:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.912 09:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.912 09:25:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.912 09:25:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 477664 00:06:35.913 09:25:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.913 09:25:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:35.913 09:25:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.913 09:25:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.913 SPDK target shutdown done 00:06:35.913 09:25:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:35.913 Success 00:06:35.913 00:06:35.913 real 0m1.600s 00:06:35.913 user 0m1.223s 00:06:35.913 sys 0m0.631s 00:06:35.913 09:25:31 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.913 09:25:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 END TEST json_config_extra_key 00:06:35.913 ************************************ 00:06:35.913 09:25:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
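The shutdown just logged follows the json_config/common.sh pattern: SIGINT the target, poll kill -0 in half-second ticks for at most 30 iterations, and only then announce 'SPDK target shutdown done'. Reconstructed as a sketch (the function and app_pid names follow the trace; the body is a plausible reduction, not a verbatim copy):

json_config_test_shutdown_app() {
    local app=$1 i

    kill -SIGINT "${app_pid[$app]}"         # ask the target to exit cleanly

    for (( i = 0; i < 30; i++ )); do
        kill -0 "${app_pid[$app]}" 2> /dev/null || {
            app_pid["$app"]=                # clear the slot, as the trace shows
            break
        }
        sleep 0.5
    done

    [[ -z ${app_pid[$app]} ]] && echo 'SPDK target shutdown done'
}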
00:06:35.913 09:25:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.913 09:25:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.913 09:25:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 START TEST alias_rpc 00:06:35.913 ************************************ 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.913 * Looking for test storage... 00:06:35.913 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.913 09:25:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.913 --rc genhtml_branch_coverage=1 00:06:35.913 --rc genhtml_function_coverage=1 00:06:35.913 --rc genhtml_legend=1 00:06:35.913 --rc geninfo_all_blocks=1 00:06:35.913 --rc geninfo_unexecuted_blocks=1 00:06:35.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.913 ' 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.913 --rc genhtml_branch_coverage=1 00:06:35.913 --rc genhtml_function_coverage=1 00:06:35.913 --rc genhtml_legend=1 00:06:35.913 --rc geninfo_all_blocks=1 00:06:35.913 --rc geninfo_unexecuted_blocks=1 00:06:35.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.913 ' 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.913 --rc genhtml_branch_coverage=1 00:06:35.913 --rc genhtml_function_coverage=1 00:06:35.913 --rc genhtml_legend=1 00:06:35.913 --rc geninfo_all_blocks=1 00:06:35.913 --rc geninfo_unexecuted_blocks=1 00:06:35.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.913 ' 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.913 --rc genhtml_branch_coverage=1 00:06:35.913 --rc genhtml_function_coverage=1 00:06:35.913 --rc genhtml_legend=1 00:06:35.913 --rc geninfo_all_blocks=1 00:06:35.913 --rc geninfo_unexecuted_blocks=1 00:06:35.913 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:35.913 ' 00:06:35.913 09:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.913 09:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=477905 00:06:35.913 09:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 477905 00:06:35.913 09:25:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.913 09:25:31 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 477905 ']' 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.913 09:25:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 [2024-10-07 09:25:31.457004] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:35.914 [2024-10-07 09:25:31.457102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477905 ] 00:06:36.173 [2024-10-07 09:25:31.532419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.173 [2024-10-07 09:25:31.616429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.112 09:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:37.112 09:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 477905 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 477905 ']' 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 477905 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477905 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477905' 00:06:37.112 killing process with pid 477905 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 477905 00:06:37.112 09:25:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 477905 00:06:37.372 00:06:37.372 real 0m1.630s 00:06:37.372 user 0m1.730s 00:06:37.372 sys 0m0.499s 00:06:37.372 09:25:32 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.372 09:25:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.372 ************************************ 00:06:37.372 END TEST alias_rpc 00:06:37.372 ************************************ 00:06:37.631 09:25:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:37.631 09:25:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:37.631 09:25:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.631 09:25:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.631 09:25:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.631 ************************************ 00:06:37.631 START TEST spdkcli_tcp 
00:06:37.631 ************************************ 00:06:37.631 09:25:32 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:37.631 * Looking for test storage... 00:06:37.631 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.631 09:25:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.631 --rc genhtml_branch_coverage=1 00:06:37.631 --rc genhtml_function_coverage=1 00:06:37.631 --rc genhtml_legend=1 00:06:37.631 --rc geninfo_all_blocks=1 00:06:37.631 --rc geninfo_unexecuted_blocks=1 00:06:37.631 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.631 ' 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.631 --rc genhtml_branch_coverage=1 00:06:37.631 --rc genhtml_function_coverage=1 00:06:37.631 --rc genhtml_legend=1 00:06:37.631 --rc geninfo_all_blocks=1 00:06:37.631 --rc geninfo_unexecuted_blocks=1 00:06:37.631 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.631 ' 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.631 --rc genhtml_branch_coverage=1 00:06:37.631 --rc genhtml_function_coverage=1 00:06:37.631 --rc genhtml_legend=1 00:06:37.631 --rc geninfo_all_blocks=1 00:06:37.631 --rc geninfo_unexecuted_blocks=1 00:06:37.631 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.631 ' 00:06:37.631 09:25:33 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.631 --rc genhtml_branch_coverage=1 00:06:37.631 --rc genhtml_function_coverage=1 00:06:37.631 --rc genhtml_legend=1 00:06:37.631 --rc geninfo_all_blocks=1 00:06:37.631 --rc geninfo_unexecuted_blocks=1 00:06:37.631 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:37.631 ' 00:06:37.631 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=478295 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 478295 00:06:37.632 09:25:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 478295 ']' 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.632 09:25:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.890 [2024-10-07 09:25:33.209993] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:37.890 [2024-10-07 09:25:33.210078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478295 ] 00:06:37.891 [2024-10-07 09:25:33.284100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.891 [2024-10-07 09:25:33.365127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.891 [2024-10-07 09:25:33.365129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.828 09:25:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.828 09:25:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:38.828 09:25:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=478329 00:06:38.828 09:25:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:38.828 09:25:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:38.828 [ 00:06:38.828 "spdk_get_version", 00:06:38.828 "rpc_get_methods", 00:06:38.828 "notify_get_notifications", 00:06:38.828 "notify_get_types", 00:06:38.828 "trace_get_info", 00:06:38.828 "trace_get_tpoint_group_mask", 00:06:38.828 "trace_disable_tpoint_group", 00:06:38.828 "trace_enable_tpoint_group", 00:06:38.828 "trace_clear_tpoint_mask", 00:06:38.828 "trace_set_tpoint_mask", 00:06:38.828 "fsdev_set_opts", 00:06:38.828 "fsdev_get_opts", 00:06:38.828 "framework_get_pci_devices", 00:06:38.828 "framework_get_config", 00:06:38.828 "framework_get_subsystems", 00:06:38.828 "vfu_tgt_set_base_path", 00:06:38.828 "keyring_get_keys", 
00:06:38.828 "iobuf_get_stats", 00:06:38.828 "iobuf_set_options", 00:06:38.828 "sock_get_default_impl", 00:06:38.828 "sock_set_default_impl", 00:06:38.828 "sock_impl_set_options", 00:06:38.828 "sock_impl_get_options", 00:06:38.828 "vmd_rescan", 00:06:38.828 "vmd_remove_device", 00:06:38.828 "vmd_enable", 00:06:38.828 "accel_get_stats", 00:06:38.828 "accel_set_options", 00:06:38.828 "accel_set_driver", 00:06:38.828 "accel_crypto_key_destroy", 00:06:38.828 "accel_crypto_keys_get", 00:06:38.828 "accel_crypto_key_create", 00:06:38.828 "accel_assign_opc", 00:06:38.828 "accel_get_module_info", 00:06:38.828 "accel_get_opc_assignments", 00:06:38.828 "bdev_get_histogram", 00:06:38.828 "bdev_enable_histogram", 00:06:38.828 "bdev_set_qos_limit", 00:06:38.828 "bdev_set_qd_sampling_period", 00:06:38.828 "bdev_get_bdevs", 00:06:38.828 "bdev_reset_iostat", 00:06:38.828 "bdev_get_iostat", 00:06:38.828 "bdev_examine", 00:06:38.828 "bdev_wait_for_examine", 00:06:38.828 "bdev_set_options", 00:06:38.828 "scsi_get_devices", 00:06:38.828 "thread_set_cpumask", 00:06:38.828 "scheduler_set_options", 00:06:38.828 "framework_get_governor", 00:06:38.828 "framework_get_scheduler", 00:06:38.829 "framework_set_scheduler", 00:06:38.829 "framework_get_reactors", 00:06:38.829 "thread_get_io_channels", 00:06:38.829 "thread_get_pollers", 00:06:38.829 "thread_get_stats", 00:06:38.829 "framework_monitor_context_switch", 00:06:38.829 "spdk_kill_instance", 00:06:38.829 "log_enable_timestamps", 00:06:38.829 "log_get_flags", 00:06:38.829 "log_clear_flag", 00:06:38.829 "log_set_flag", 00:06:38.829 "log_get_level", 00:06:38.829 "log_set_level", 00:06:38.829 "log_get_print_level", 00:06:38.829 "log_set_print_level", 00:06:38.829 "framework_enable_cpumask_locks", 00:06:38.829 "framework_disable_cpumask_locks", 00:06:38.829 "framework_wait_init", 00:06:38.829 "framework_start_init", 00:06:38.829 "virtio_blk_create_transport", 00:06:38.829 "virtio_blk_get_transports", 00:06:38.829 "vhost_controller_set_coalescing", 00:06:38.829 "vhost_get_controllers", 00:06:38.829 "vhost_delete_controller", 00:06:38.829 "vhost_create_blk_controller", 00:06:38.829 "vhost_scsi_controller_remove_target", 00:06:38.829 "vhost_scsi_controller_add_target", 00:06:38.829 "vhost_start_scsi_controller", 00:06:38.829 "vhost_create_scsi_controller", 00:06:38.829 "ublk_recover_disk", 00:06:38.829 "ublk_get_disks", 00:06:38.829 "ublk_stop_disk", 00:06:38.829 "ublk_start_disk", 00:06:38.829 "ublk_destroy_target", 00:06:38.829 "ublk_create_target", 00:06:38.829 "nbd_get_disks", 00:06:38.829 "nbd_stop_disk", 00:06:38.829 "nbd_start_disk", 00:06:38.829 "env_dpdk_get_mem_stats", 00:06:38.829 "nvmf_stop_mdns_prr", 00:06:38.829 "nvmf_publish_mdns_prr", 00:06:38.829 "nvmf_subsystem_get_listeners", 00:06:38.829 "nvmf_subsystem_get_qpairs", 00:06:38.829 "nvmf_subsystem_get_controllers", 00:06:38.829 "nvmf_get_stats", 00:06:38.829 "nvmf_get_transports", 00:06:38.829 "nvmf_create_transport", 00:06:38.829 "nvmf_get_targets", 00:06:38.829 "nvmf_delete_target", 00:06:38.829 "nvmf_create_target", 00:06:38.829 "nvmf_subsystem_allow_any_host", 00:06:38.829 "nvmf_subsystem_set_keys", 00:06:38.829 "nvmf_subsystem_remove_host", 00:06:38.829 "nvmf_subsystem_add_host", 00:06:38.829 "nvmf_ns_remove_host", 00:06:38.829 "nvmf_ns_add_host", 00:06:38.829 "nvmf_subsystem_remove_ns", 00:06:38.829 "nvmf_subsystem_set_ns_ana_group", 00:06:38.829 "nvmf_subsystem_add_ns", 00:06:38.829 "nvmf_subsystem_listener_set_ana_state", 00:06:38.829 "nvmf_discovery_get_referrals", 00:06:38.829 
"nvmf_discovery_remove_referral", 00:06:38.829 "nvmf_discovery_add_referral", 00:06:38.829 "nvmf_subsystem_remove_listener", 00:06:38.829 "nvmf_subsystem_add_listener", 00:06:38.829 "nvmf_delete_subsystem", 00:06:38.829 "nvmf_create_subsystem", 00:06:38.829 "nvmf_get_subsystems", 00:06:38.829 "nvmf_set_crdt", 00:06:38.829 "nvmf_set_config", 00:06:38.829 "nvmf_set_max_subsystems", 00:06:38.829 "iscsi_get_histogram", 00:06:38.829 "iscsi_enable_histogram", 00:06:38.829 "iscsi_set_options", 00:06:38.829 "iscsi_get_auth_groups", 00:06:38.829 "iscsi_auth_group_remove_secret", 00:06:38.829 "iscsi_auth_group_add_secret", 00:06:38.829 "iscsi_delete_auth_group", 00:06:38.829 "iscsi_create_auth_group", 00:06:38.829 "iscsi_set_discovery_auth", 00:06:38.829 "iscsi_get_options", 00:06:38.829 "iscsi_target_node_request_logout", 00:06:38.829 "iscsi_target_node_set_redirect", 00:06:38.829 "iscsi_target_node_set_auth", 00:06:38.829 "iscsi_target_node_add_lun", 00:06:38.829 "iscsi_get_stats", 00:06:38.829 "iscsi_get_connections", 00:06:38.829 "iscsi_portal_group_set_auth", 00:06:38.829 "iscsi_start_portal_group", 00:06:38.829 "iscsi_delete_portal_group", 00:06:38.829 "iscsi_create_portal_group", 00:06:38.829 "iscsi_get_portal_groups", 00:06:38.829 "iscsi_delete_target_node", 00:06:38.829 "iscsi_target_node_remove_pg_ig_maps", 00:06:38.829 "iscsi_target_node_add_pg_ig_maps", 00:06:38.829 "iscsi_create_target_node", 00:06:38.829 "iscsi_get_target_nodes", 00:06:38.829 "iscsi_delete_initiator_group", 00:06:38.829 "iscsi_initiator_group_remove_initiators", 00:06:38.829 "iscsi_initiator_group_add_initiators", 00:06:38.829 "iscsi_create_initiator_group", 00:06:38.829 "iscsi_get_initiator_groups", 00:06:38.829 "fsdev_aio_delete", 00:06:38.829 "fsdev_aio_create", 00:06:38.829 "keyring_linux_set_options", 00:06:38.829 "keyring_file_remove_key", 00:06:38.829 "keyring_file_add_key", 00:06:38.829 "vfu_virtio_create_fs_endpoint", 00:06:38.829 "vfu_virtio_create_scsi_endpoint", 00:06:38.829 "vfu_virtio_scsi_remove_target", 00:06:38.829 "vfu_virtio_scsi_add_target", 00:06:38.829 "vfu_virtio_create_blk_endpoint", 00:06:38.829 "vfu_virtio_delete_endpoint", 00:06:38.829 "iaa_scan_accel_module", 00:06:38.829 "dsa_scan_accel_module", 00:06:38.829 "ioat_scan_accel_module", 00:06:38.829 "accel_error_inject_error", 00:06:38.829 "bdev_iscsi_delete", 00:06:38.829 "bdev_iscsi_create", 00:06:38.829 "bdev_iscsi_set_options", 00:06:38.829 "bdev_virtio_attach_controller", 00:06:38.829 "bdev_virtio_scsi_get_devices", 00:06:38.829 "bdev_virtio_detach_controller", 00:06:38.829 "bdev_virtio_blk_set_hotplug", 00:06:38.829 "bdev_ftl_set_property", 00:06:38.829 "bdev_ftl_get_properties", 00:06:38.829 "bdev_ftl_get_stats", 00:06:38.829 "bdev_ftl_unmap", 00:06:38.829 "bdev_ftl_unload", 00:06:38.829 "bdev_ftl_delete", 00:06:38.829 "bdev_ftl_load", 00:06:38.829 "bdev_ftl_create", 00:06:38.829 "bdev_aio_delete", 00:06:38.829 "bdev_aio_rescan", 00:06:38.829 "bdev_aio_create", 00:06:38.829 "blobfs_create", 00:06:38.829 "blobfs_detect", 00:06:38.829 "blobfs_set_cache_size", 00:06:38.829 "bdev_zone_block_delete", 00:06:38.829 "bdev_zone_block_create", 00:06:38.829 "bdev_delay_delete", 00:06:38.829 "bdev_delay_create", 00:06:38.829 "bdev_delay_update_latency", 00:06:38.829 "bdev_split_delete", 00:06:38.829 "bdev_split_create", 00:06:38.829 "bdev_error_inject_error", 00:06:38.829 "bdev_error_delete", 00:06:38.829 "bdev_error_create", 00:06:38.829 "bdev_raid_set_options", 00:06:38.829 "bdev_raid_remove_base_bdev", 00:06:38.829 "bdev_raid_add_base_bdev", 
00:06:38.829 "bdev_raid_delete", 00:06:38.829 "bdev_raid_create", 00:06:38.829 "bdev_raid_get_bdevs", 00:06:38.829 "bdev_lvol_set_parent_bdev", 00:06:38.829 "bdev_lvol_set_parent", 00:06:38.829 "bdev_lvol_check_shallow_copy", 00:06:38.829 "bdev_lvol_start_shallow_copy", 00:06:38.829 "bdev_lvol_grow_lvstore", 00:06:38.829 "bdev_lvol_get_lvols", 00:06:38.829 "bdev_lvol_get_lvstores", 00:06:38.829 "bdev_lvol_delete", 00:06:38.829 "bdev_lvol_set_read_only", 00:06:38.829 "bdev_lvol_resize", 00:06:38.829 "bdev_lvol_decouple_parent", 00:06:38.829 "bdev_lvol_inflate", 00:06:38.829 "bdev_lvol_rename", 00:06:38.829 "bdev_lvol_clone_bdev", 00:06:38.829 "bdev_lvol_clone", 00:06:38.829 "bdev_lvol_snapshot", 00:06:38.829 "bdev_lvol_create", 00:06:38.829 "bdev_lvol_delete_lvstore", 00:06:38.829 "bdev_lvol_rename_lvstore", 00:06:38.829 "bdev_lvol_create_lvstore", 00:06:38.829 "bdev_passthru_delete", 00:06:38.829 "bdev_passthru_create", 00:06:38.829 "bdev_nvme_cuse_unregister", 00:06:38.829 "bdev_nvme_cuse_register", 00:06:38.829 "bdev_opal_new_user", 00:06:38.829 "bdev_opal_set_lock_state", 00:06:38.829 "bdev_opal_delete", 00:06:38.829 "bdev_opal_get_info", 00:06:38.829 "bdev_opal_create", 00:06:38.829 "bdev_nvme_opal_revert", 00:06:38.829 "bdev_nvme_opal_init", 00:06:38.829 "bdev_nvme_send_cmd", 00:06:38.829 "bdev_nvme_set_keys", 00:06:38.829 "bdev_nvme_get_path_iostat", 00:06:38.829 "bdev_nvme_get_mdns_discovery_info", 00:06:38.829 "bdev_nvme_stop_mdns_discovery", 00:06:38.829 "bdev_nvme_start_mdns_discovery", 00:06:38.829 "bdev_nvme_set_multipath_policy", 00:06:38.829 "bdev_nvme_set_preferred_path", 00:06:38.829 "bdev_nvme_get_io_paths", 00:06:38.829 "bdev_nvme_remove_error_injection", 00:06:38.829 "bdev_nvme_add_error_injection", 00:06:38.829 "bdev_nvme_get_discovery_info", 00:06:38.829 "bdev_nvme_stop_discovery", 00:06:38.829 "bdev_nvme_start_discovery", 00:06:38.829 "bdev_nvme_get_controller_health_info", 00:06:38.829 "bdev_nvme_disable_controller", 00:06:38.829 "bdev_nvme_enable_controller", 00:06:38.829 "bdev_nvme_reset_controller", 00:06:38.829 "bdev_nvme_get_transport_statistics", 00:06:38.829 "bdev_nvme_apply_firmware", 00:06:38.829 "bdev_nvme_detach_controller", 00:06:38.829 "bdev_nvme_get_controllers", 00:06:38.829 "bdev_nvme_attach_controller", 00:06:38.829 "bdev_nvme_set_hotplug", 00:06:38.829 "bdev_nvme_set_options", 00:06:38.829 "bdev_null_resize", 00:06:38.829 "bdev_null_delete", 00:06:38.829 "bdev_null_create", 00:06:38.829 "bdev_malloc_delete", 00:06:38.829 "bdev_malloc_create" 00:06:38.829 ] 00:06:38.829 09:25:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.829 09:25:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:38.829 09:25:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 478295 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 478295 ']' 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 478295 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.829 09:25:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478295 00:06:38.830 09:25:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.830 09:25:34 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.830 09:25:34 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478295' 00:06:38.830 killing process with pid 478295 00:06:38.830 09:25:34 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 478295 00:06:38.830 09:25:34 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 478295 00:06:39.399 00:06:39.399 real 0m1.743s 00:06:39.399 user 0m3.142s 00:06:39.399 sys 0m0.550s 00:06:39.399 09:25:34 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.399 09:25:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.399 ************************************ 00:06:39.399 END TEST spdkcli_tcp 00:06:39.399 ************************************ 00:06:39.399 09:25:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.399 09:25:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.399 09:25:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.399 09:25:34 -- common/autotest_common.sh@10 -- # set +x 00:06:39.399 ************************************ 00:06:39.399 START TEST dpdk_mem_utility 00:06:39.399 ************************************ 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.399 * Looking for test storage... 00:06:39.399 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.399 09:25:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.399 --rc genhtml_branch_coverage=1 00:06:39.399 --rc genhtml_function_coverage=1 00:06:39.399 --rc genhtml_legend=1 00:06:39.399 --rc geninfo_all_blocks=1 00:06:39.399 --rc geninfo_unexecuted_blocks=1 00:06:39.399 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.399 ' 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.399 --rc genhtml_branch_coverage=1 00:06:39.399 --rc genhtml_function_coverage=1 00:06:39.399 --rc genhtml_legend=1 00:06:39.399 --rc geninfo_all_blocks=1 00:06:39.399 --rc geninfo_unexecuted_blocks=1 00:06:39.399 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.399 ' 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.399 --rc genhtml_branch_coverage=1 00:06:39.399 --rc genhtml_function_coverage=1 00:06:39.399 --rc genhtml_legend=1 00:06:39.399 --rc geninfo_all_blocks=1 00:06:39.399 --rc geninfo_unexecuted_blocks=1 00:06:39.399 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.399 ' 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.399 --rc genhtml_branch_coverage=1 00:06:39.399 --rc genhtml_function_coverage=1 00:06:39.399 --rc genhtml_legend=1 00:06:39.399 --rc geninfo_all_blocks=1 00:06:39.399 --rc geninfo_unexecuted_blocks=1 00:06:39.399 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.399 ' 00:06:39.399 09:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:39.399 09:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=478560 00:06:39.399 09:25:34 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 478560 00:06:39.399 09:25:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 478560 ']' 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.399 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.400 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.400 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.400 09:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:39.659 [2024-10-07 09:25:34.980633] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:39.659 [2024-10-07 09:25:34.980713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478560 ] 00:06:39.659 [2024-10-07 09:25:35.055684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.659 [2024-10-07 09:25:35.142161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.597 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.597 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:40.598 09:25:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:40.598 09:25:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:40.598 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.598 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.598 { 00:06:40.598 "filename": "/tmp/spdk_mem_dump.txt" 00:06:40.598 } 00:06:40.598 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.598 09:25:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.598 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:40.598 1 heaps totaling size 860.000000 MiB 00:06:40.598 size: 860.000000 MiB heap id: 0 00:06:40.598 end heaps---------- 00:06:40.598 9 mempools totaling size 642.649841 MiB 00:06:40.598 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:40.598 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:40.598 size: 92.545471 MiB name: bdev_io_478560 00:06:40.598 size: 51.011292 MiB name: evtpool_478560 00:06:40.598 size: 50.003479 MiB name: msgpool_478560 00:06:40.598 size: 36.509338 MiB name: fsdev_io_478560 00:06:40.598 size: 21.763794 MiB name: PDU_Pool 00:06:40.598 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:40.598 size: 0.026123 MiB name: Session_Pool 00:06:40.598 end mempools------- 00:06:40.598 6 memzones totaling size 4.142822 MiB 00:06:40.598 size: 1.000366 MiB name: RG_ring_0_478560 00:06:40.598 size: 1.000366 MiB name: RG_ring_1_478560 00:06:40.598 size: 1.000366 MiB name: RG_ring_4_478560 
00:06:40.598 size: 1.000366 MiB name: RG_ring_5_478560 00:06:40.598 size: 0.125366 MiB name: RG_ring_2_478560 00:06:40.598 size: 0.015991 MiB name: RG_ring_3_478560 00:06:40.598 end memzones------- 00:06:40.598 09:25:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:40.598 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:40.598 list of free elements. size: 13.984680 MiB 00:06:40.598 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:40.598 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:40.598 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:40.598 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:40.598 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:40.598 element at address: 0x20000b200000 with size: 0.959839 MiB 00:06:40.598 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:40.598 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:40.598 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:40.598 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:40.598 element at address: 0x200003e00000 with size: 0.495422 MiB 00:06:40.598 element at address: 0x200007000000 with size: 0.490723 MiB 00:06:40.598 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:40.598 element at address: 0x200013800000 with size: 0.481934 MiB 00:06:40.598 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:40.598 element at address: 0x200003a00000 with size: 0.355042 MiB 00:06:40.598 list of standard malloc elements. size: 199.218628 MiB 00:06:40.598 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:40.598 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:40.598 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:40.598 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:40.598 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:40.598 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:40.598 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:40.598 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:40.598 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:40.598 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003a5b100 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003adb3c0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003adb5c0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003adf880 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200003eff000 with size: 0.000183 
MiB 00:06:40.598 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20000707da00 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20000707dac0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001387b600 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001387b6c0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x2000138fb980 with size: 0.000183 MiB 00:06:40.598 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:40.598 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:40.598 list of memzone associated elements. size: 646.796692 MiB 00:06:40.598 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:40.598 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:40.598 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:40.598 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:40.598 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:40.598 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_478560_0 00:06:40.598 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:40.598 associated memzone info: size: 48.002930 MiB name: MP_evtpool_478560_0 00:06:40.598 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:40.598 associated memzone info: size: 48.002930 MiB name: MP_msgpool_478560_0 00:06:40.598 element at address: 0x2000139fdb80 with size: 36.008911 MiB 00:06:40.598 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_478560_0 00:06:40.598 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:40.598 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:40.598 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:40.598 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:40.598 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:40.598 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_478560 00:06:40.598 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:40.598 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_478560 00:06:40.598 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:40.598 associated memzone info: size: 1.007996 MiB name: MP_evtpool_478560 00:06:40.598 element at address: 0x2000138fba40 with size: 1.008118 MiB 00:06:40.598 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:40.598 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:40.598 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:40.598 element at address: 0x20000b2fde40 with size: 1.008118 
MiB 00:06:40.598 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:40.598 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:40.598 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:40.598 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:40.598 associated memzone info: size: 1.000366 MiB name: RG_ring_0_478560 00:06:40.598 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:40.598 associated memzone info: size: 1.000366 MiB name: RG_ring_1_478560 00:06:40.598 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:40.598 associated memzone info: size: 1.000366 MiB name: RG_ring_4_478560 00:06:40.598 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:40.598 associated memzone info: size: 1.000366 MiB name: RG_ring_5_478560 00:06:40.598 element at address: 0x200003a5b1c0 with size: 0.500488 MiB 00:06:40.598 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_478560 00:06:40.598 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:06:40.598 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_478560 00:06:40.598 element at address: 0x20001387b780 with size: 0.500488 MiB 00:06:40.598 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:40.598 element at address: 0x20000707db80 with size: 0.500488 MiB 00:06:40.598 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:40.598 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:40.598 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:40.598 element at address: 0x200003adf940 with size: 0.125488 MiB 00:06:40.598 associated memzone info: size: 0.125366 MiB name: RG_ring_2_478560 00:06:40.599 element at address: 0x20000b2f5b80 with size: 0.031738 MiB 00:06:40.599 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:40.599 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:40.599 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:40.599 element at address: 0x200003adb680 with size: 0.016113 MiB 00:06:40.599 associated memzone info: size: 0.015991 MiB name: RG_ring_3_478560 00:06:40.599 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:40.599 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:40.599 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:40.599 associated memzone info: size: 0.000183 MiB name: MP_msgpool_478560 00:06:40.599 element at address: 0x200003adb480 with size: 0.000305 MiB 00:06:40.599 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_478560 00:06:40.599 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:06:40.599 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_478560 00:06:40.599 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:40.599 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:40.599 09:25:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:40.599 09:25:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 478560 00:06:40.599 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 478560 ']' 00:06:40.599 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 478560 00:06:40.599 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:40.599 09:25:35 dpdk_mem_utility 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.599 09:25:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478560 00:06:40.599 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.599 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.599 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478560' 00:06:40.599 killing process with pid 478560 00:06:40.599 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 478560 00:06:40.599 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 478560 00:06:40.859 00:06:40.859 real 0m1.593s 00:06:40.859 user 0m1.658s 00:06:40.859 sys 0m0.488s 00:06:40.859 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.859 09:25:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.859 ************************************ 00:06:40.859 END TEST dpdk_mem_utility 00:06:40.859 ************************************ 00:06:41.119 09:25:36 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:41.119 09:25:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.119 09:25:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.119 09:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.119 ************************************ 00:06:41.119 START TEST event 00:06:41.119 ************************************ 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:41.119 * Looking for test storage... 00:06:41.119 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.119 09:25:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.119 09:25:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.119 09:25:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.119 09:25:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.119 09:25:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.119 09:25:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.119 09:25:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.119 09:25:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.119 09:25:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.119 09:25:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.119 09:25:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.119 09:25:36 event -- scripts/common.sh@344 -- # case "$op" in 00:06:41.119 09:25:36 event -- scripts/common.sh@345 -- # : 1 00:06:41.119 09:25:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.119 09:25:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.119 09:25:36 event -- scripts/common.sh@365 -- # decimal 1 00:06:41.119 09:25:36 event -- scripts/common.sh@353 -- # local d=1 00:06:41.119 09:25:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.119 09:25:36 event -- scripts/common.sh@355 -- # echo 1 00:06:41.119 09:25:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.119 09:25:36 event -- scripts/common.sh@366 -- # decimal 2 00:06:41.119 09:25:36 event -- scripts/common.sh@353 -- # local d=2 00:06:41.119 09:25:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.119 09:25:36 event -- scripts/common.sh@355 -- # echo 2 00:06:41.119 09:25:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.119 09:25:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.119 09:25:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.119 09:25:36 event -- scripts/common.sh@368 -- # return 0 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.119 --rc genhtml_branch_coverage=1 00:06:41.119 --rc genhtml_function_coverage=1 00:06:41.119 --rc genhtml_legend=1 00:06:41.119 --rc geninfo_all_blocks=1 00:06:41.119 --rc geninfo_unexecuted_blocks=1 00:06:41.119 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.119 ' 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.119 --rc genhtml_branch_coverage=1 00:06:41.119 --rc genhtml_function_coverage=1 00:06:41.119 --rc genhtml_legend=1 00:06:41.119 --rc geninfo_all_blocks=1 00:06:41.119 --rc geninfo_unexecuted_blocks=1 00:06:41.119 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.119 ' 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.119 --rc genhtml_branch_coverage=1 00:06:41.119 --rc genhtml_function_coverage=1 00:06:41.119 --rc genhtml_legend=1 00:06:41.119 --rc geninfo_all_blocks=1 00:06:41.119 --rc geninfo_unexecuted_blocks=1 00:06:41.119 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.119 ' 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.119 --rc genhtml_branch_coverage=1 00:06:41.119 --rc genhtml_function_coverage=1 00:06:41.119 --rc genhtml_legend=1 00:06:41.119 --rc geninfo_all_blocks=1 00:06:41.119 --rc geninfo_unexecuted_blocks=1 00:06:41.119 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:41.119 ' 00:06:41.119 09:25:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:41.119 09:25:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.119 09:25:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:41.119 09:25:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:06:41.119 09:25:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.119 ************************************ 00:06:41.119 START TEST event_perf 00:06:41.119 ************************************ 00:06:41.119 09:25:36 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.379 Running I/O for 1 seconds...[2024-10-07 09:25:36.690371] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:41.379 [2024-10-07 09:25:36.690455] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478814 ] 00:06:41.379 [2024-10-07 09:25:36.765452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.379 [2024-10-07 09:25:36.850281] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.379 [2024-10-07 09:25:36.850370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.379 [2024-10-07 09:25:36.850447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.379 [2024-10-07 09:25:36.850445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.759 Running I/O for 1 seconds... 00:06:42.759 lcore 0: 193419 00:06:42.759 lcore 1: 193418 00:06:42.759 lcore 2: 193419 00:06:42.759 lcore 3: 193419 00:06:42.759 done. 00:06:42.759 00:06:42.759 real 0m1.255s 00:06:42.759 user 0m4.151s 00:06:42.759 sys 0m0.099s 00:06:42.759 09:25:37 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.759 09:25:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.759 ************************************ 00:06:42.759 END TEST event_perf 00:06:42.759 ************************************ 00:06:42.759 09:25:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:42.759 09:25:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:42.759 09:25:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.759 09:25:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.759 ************************************ 00:06:42.759 START TEST event_reactor 00:06:42.759 ************************************ 00:06:42.759 09:25:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:42.760 [2024-10-07 09:25:38.027736] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:42.760 [2024-10-07 09:25:38.027840] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479009 ] 00:06:42.760 [2024-10-07 09:25:38.106207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.760 [2024-10-07 09:25:38.198897] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.140 test_start 00:06:44.140 oneshot 00:06:44.140 tick 100 00:06:44.140 tick 100 00:06:44.140 tick 250 00:06:44.140 tick 100 00:06:44.140 tick 100 00:06:44.140 tick 100 00:06:44.140 tick 250 00:06:44.140 tick 500 00:06:44.140 tick 100 00:06:44.140 tick 100 00:06:44.140 tick 250 00:06:44.140 tick 100 00:06:44.140 tick 100 00:06:44.140 test_end 00:06:44.140 00:06:44.140 real 0m1.266s 00:06:44.140 user 0m1.160s 00:06:44.140 sys 0m0.102s 00:06:44.140 09:25:39 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.140 09:25:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.140 ************************************ 00:06:44.140 END TEST event_reactor 00:06:44.140 ************************************ 00:06:44.140 09:25:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.140 09:25:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:44.140 09:25:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.140 09:25:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.140 ************************************ 00:06:44.140 START TEST event_reactor_perf 00:06:44.140 ************************************ 00:06:44.140 09:25:39 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.140 [2024-10-07 09:25:39.379664] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:44.140 [2024-10-07 09:25:39.379749] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479209 ] 00:06:44.140 [2024-10-07 09:25:39.458294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.140 [2024-10-07 09:25:39.546374] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.077 test_start 00:06:45.077 test_end 00:06:45.077 Performance: 954748 events per second 00:06:45.077 00:06:45.077 real 0m1.262s 00:06:45.077 user 0m1.150s 00:06:45.077 sys 0m0.106s 00:06:45.077 09:25:40 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.077 09:25:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.077 ************************************ 00:06:45.077 END TEST event_reactor_perf 00:06:45.077 ************************************ 00:06:45.337 09:25:40 event -- event/event.sh@49 -- # uname -s 00:06:45.337 09:25:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.337 09:25:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.337 09:25:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.337 09:25:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.337 09:25:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.337 ************************************ 00:06:45.337 START TEST event_scheduler 00:06:45.337 ************************************ 00:06:45.337 09:25:40 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.337 * Looking for test storage... 
00:06:45.337 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:45.337 09:25:40 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.337 09:25:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.337 09:25:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:45.337 09:25:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.337 09:25:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.338 09:25:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.597 09:25:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.597 --rc genhtml_branch_coverage=1 00:06:45.597 --rc genhtml_function_coverage=1 00:06:45.597 --rc genhtml_legend=1 00:06:45.597 --rc geninfo_all_blocks=1 00:06:45.597 --rc geninfo_unexecuted_blocks=1 00:06:45.597 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.597 ' 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.597 --rc genhtml_branch_coverage=1 00:06:45.597 --rc genhtml_function_coverage=1 00:06:45.597 --rc genhtml_legend=1 00:06:45.597 --rc geninfo_all_blocks=1 00:06:45.597 --rc geninfo_unexecuted_blocks=1 00:06:45.597 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.597 ' 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.597 --rc genhtml_branch_coverage=1 00:06:45.597 --rc genhtml_function_coverage=1 00:06:45.597 --rc genhtml_legend=1 00:06:45.597 --rc geninfo_all_blocks=1 00:06:45.597 --rc geninfo_unexecuted_blocks=1 00:06:45.597 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.597 ' 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.597 --rc genhtml_branch_coverage=1 00:06:45.597 --rc genhtml_function_coverage=1 00:06:45.597 --rc genhtml_legend=1 00:06:45.597 --rc geninfo_all_blocks=1 00:06:45.597 --rc geninfo_unexecuted_blocks=1 00:06:45.597 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:45.597 ' 00:06:45.597 09:25:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.597 09:25:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=479438 00:06:45.597 09:25:40 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.597 09:25:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.597 09:25:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 479438 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 479438 ']' 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.597 09:25:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.597 [2024-10-07 09:25:40.917726] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:45.597 [2024-10-07 09:25:40.917784] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479438 ] 00:06:45.597 [2024-10-07 09:25:40.986716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.597 [2024-10-07 09:25:41.078177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.597 [2024-10-07 09:25:41.078253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.597 [2024-10-07 09:25:41.078333] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.597 [2024-10-07 09:25:41.078334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.597 09:25:41 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.597 09:25:41 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:45.597 09:25:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:45.597 09:25:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.597 09:25:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.597 [2024-10-07 09:25:41.110923] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:45.597 [2024-10-07 09:25:41.110944] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:45.597 [2024-10-07 09:25:41.110955] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:45.597 [2024-10-07 09:25:41.110963] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:45.597 [2024-10-07 09:25:41.110970] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:45.597 09:25:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.597 09:25:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:45.597 09:25:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.597 
09:25:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 [2024-10-07 09:25:41.184158] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:45.857 09:25:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:45.857 09:25:41 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.857 09:25:41 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 ************************************ 00:06:45.857 START TEST scheduler_create_thread 00:06:45.857 ************************************ 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 2 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 3 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 4 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 5 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 6 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 7 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 8 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 9 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 10 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.857 09:25:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.796 09:25:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.796 09:25:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.796 09:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.796 09:25:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.177 09:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.177 09:25:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.177 09:25:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.177 09:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.177 09:25:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.115 09:25:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.115 00:06:49.115 real 0m3.381s 00:06:49.115 user 0m0.023s 00:06:49.115 sys 0m0.007s 00:06:49.115 09:25:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.115 09:25:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.115 ************************************ 00:06:49.115 END TEST scheduler_create_thread 00:06:49.115 ************************************ 00:06:49.115 09:25:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.115 09:25:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 479438 00:06:49.115 09:25:44 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 479438 ']' 00:06:49.115 09:25:44 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 479438 00:06:49.115 09:25:44 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:49.115 09:25:44 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.115 09:25:44 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479438 00:06:49.374 09:25:44 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:49.374 09:25:44 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:49.374 09:25:44 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 479438' 00:06:49.374 killing process with pid 479438 00:06:49.374 09:25:44 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 479438 00:06:49.374 09:25:44 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 479438 00:06:49.632 [2024-10-07 09:25:44.984134] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
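The trace above is SPDK's event-scheduler test end to end: the scheduler app is launched with --wait-for-rpc, the dynamic scheduler is selected over RPC before framework init, and the scheduler_create_thread subtest creates one busy and one idle pinned thread per core before tearing the process down. A condensed sketch of that flow, assuming SPDK's autotest helpers (rpc_cmd, waitforlisten, killprocess) are sourced; this is a simplification of test/event/scheduler/scheduler.sh, not the verbatim script:

    #!/usr/bin/env bash
    # Condensed sketch of the scheduler test flow seen in the trace above.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $scheduler_pid              # blocks until /var/tmp/spdk.sock answers

    rpc_cmd framework_set_scheduler dynamic   # must happen before framework_start_init
    rpc_cmd framework_start_init

    # One active (100% busy) and one idle thread pinned to each core in the 0xF mask.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m $mask -a 0
    done

    trap - SIGINT SIGTERM EXIT
    killprocess $scheduler_pid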
00:06:49.894 00:06:49.894 real 0m4.516s 00:06:49.894 user 0m7.784s 00:06:49.894 sys 0m0.445s 00:06:49.894 09:25:45 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.895 09:25:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.895 ************************************ 00:06:49.895 END TEST event_scheduler 00:06:49.895 ************************************ 00:06:49.895 09:25:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:49.895 09:25:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:49.895 09:25:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.895 09:25:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.895 09:25:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.895 ************************************ 00:06:49.895 START TEST app_repeat 00:06:49.895 ************************************ 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=480184 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 480184' 00:06:49.895 Process app_repeat pid: 480184 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:49.895 spdk_app_start Round 0 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 480184 /var/tmp/spdk-nbd.sock 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 480184 ']' 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.895 09:25:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.895 09:25:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:49.895 [2024-10-07 09:25:45.339624] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
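The app_repeat test starting here follows the same launch-and-wait pattern, but against a dedicated RPC socket (/var/tmp/spdk-nbd.sock) and across three rounds ("spdk_app_start Round 0..2"). A sketch of the launch pattern, with the round body elided since each round's bdev/nbd work is shown in the trace that follows; helper names come from autotest_common.sh, and the loop structure here is an assumption based on the echoed round markers:

    # Sketch of the app_repeat launch pattern from event/event.sh, as traced.
    rpc_server=/var/tmp/spdk-nbd.sock

    $SPDK/test/event/app_repeat/app_repeat -r $rpc_server -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $repeat_pid $rpc_server     # second arg selects the socket to poll

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... create Malloc0/Malloc1, attach to /dev/nbd0 and /dev/nbd1,
        #     write and verify data, stop the disks, restart the app ...
    done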
00:06:49.895 [2024-10-07 09:25:45.339707] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480184 ] 00:06:49.895 [2024-10-07 09:25:45.414206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.154 [2024-10-07 09:25:45.506186] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.154 [2024-10-07 09:25:45.506188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.722 09:25:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.722 09:25:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:50.722 09:25:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.981 Malloc0 00:06:50.981 09:25:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.240 Malloc1 00:06:51.240 09:25:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.240 09:25:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.240 09:25:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.240 09:25:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.240 09:25:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.240 09:25:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.241 09:25:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.500 /dev/nbd0 00:06:51.500 09:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.500 09:25:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.500 1+0 records in 00:06:51.500 1+0 records out 00:06:51.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022752 s, 18.0 MB/s 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.500 09:25:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.500 09:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.500 09:25:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.500 09:25:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.759 /dev/nbd1 00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.759 1+0 records in 00:06:51.759 1+0 records out 00:06:51.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253746 s, 16.1 MB/s 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.759 09:25:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
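The grep/dd/stat sequence traced for each device is the waitfornbd helper from autotest_common.sh: it polls /proc/partitions until the kernel exposes the nbd device, then confirms the device is actually connected by reading one 4 KiB block back with O_DIRECT and checking that the copy is non-empty. A sketch consistent with the trace (the retry sleep and scratch path are assumptions; the loop bounds and checks match the traced commands):

    # Sketch of waitfornbd as exercised in the trace above.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # An nbd device that exists but is not connected reads back 0 bytes.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }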
00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.759 09:25:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.019 { 00:06:52.019 "nbd_device": "/dev/nbd0", 00:06:52.019 "bdev_name": "Malloc0" 00:06:52.019 }, 00:06:52.019 { 00:06:52.019 "nbd_device": "/dev/nbd1", 00:06:52.019 "bdev_name": "Malloc1" 00:06:52.019 } 00:06:52.019 ]' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.019 { 00:06:52.019 "nbd_device": "/dev/nbd0", 00:06:52.019 "bdev_name": "Malloc0" 00:06:52.019 }, 00:06:52.019 { 00:06:52.019 "nbd_device": "/dev/nbd1", 00:06:52.019 "bdev_name": "Malloc1" 00:06:52.019 } 00:06:52.019 ]' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.019 /dev/nbd1' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.019 /dev/nbd1' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.019 256+0 records in 00:06:52.019 256+0 records out 00:06:52.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115275 s, 91.0 MB/s 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.019 256+0 records in 00:06:52.019 256+0 records out 00:06:52.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204122 s, 51.4 MB/s 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.019 256+0 records in 00:06:52.019 256+0 records out 00:06:52.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236413 s, 44.4 
MB/s 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.019 09:25:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.278 09:25:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.538 09:25:47 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.538 09:25:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.797 09:25:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.797 09:25:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.797 09:25:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.798 09:25:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.798 09:25:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.056 09:25:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.056 [2024-10-07 09:25:48.591211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.315 [2024-10-07 09:25:48.672462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.315 [2024-10-07 09:25:48.672464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.315 [2024-10-07 09:25:48.720007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.315 [2024-10-07 09:25:48.720065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.849 09:25:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:55.849 09:25:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:55.849 spdk_app_start Round 1 00:06:55.849 09:25:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 480184 /var/tmp/spdk-nbd.sock 00:06:55.849 09:25:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 480184 ']' 00:06:55.849 09:25:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.849 09:25:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.849 09:25:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
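The nbd_get_count calls bracketing each round ask the target which nbd devices it is currently serving: nbd_get_disks returns a JSON array, jq extracts the .nbd_device fields, and grep -c counts them (2 while both disks are attached, 0 after nbd_stop_disk). A sketch of the helper from bdev/nbd_common.sh, assuming the jq pipeline exactly as it appears in the trace:

    # Sketch of nbd_get_count as traced above.
    nbd_get_count() {
        local rpc_server=$1 json names
        json=$($SPDK/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits 1 when nothing matches, hence the guard.
        echo "$names" | grep -c /dev/nbd || true
    }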
00:06:55.849 09:25:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.849 09:25:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.107 09:25:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.107 09:25:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:56.107 09:25:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.365 Malloc0 00:06:56.365 09:25:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.624 Malloc1 00:06:56.624 09:25:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.624 09:25:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.883 /dev/nbd0 00:06:56.883 09:25:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.883 09:25:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.883 1+0 records in 00:06:56.883 1+0 records out 00:06:56.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241534 s, 17.0 MB/s 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.883 09:25:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.883 09:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.883 09:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.883 09:25:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.142 /dev/nbd1 00:06:57.142 09:25:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.142 09:25:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.142 1+0 records in 00:06:57.142 1+0 records out 00:06:57.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258973 s, 15.8 MB/s 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.142 09:25:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.142 09:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.142 09:25:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.142 09:25:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.143 09:25:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.143 09:25:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:57.401 09:25:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.401 { 00:06:57.401 "nbd_device": "/dev/nbd0", 00:06:57.401 "bdev_name": "Malloc0" 00:06:57.401 }, 00:06:57.401 { 00:06:57.401 "nbd_device": "/dev/nbd1", 00:06:57.401 "bdev_name": "Malloc1" 00:06:57.401 } 00:06:57.401 ]' 00:06:57.401 09:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.401 { 00:06:57.401 "nbd_device": "/dev/nbd0", 00:06:57.401 "bdev_name": "Malloc0" 00:06:57.401 }, 00:06:57.401 { 00:06:57.401 "nbd_device": "/dev/nbd1", 00:06:57.401 "bdev_name": "Malloc1" 00:06:57.401 } 00:06:57.401 ]' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.402 /dev/nbd1' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.402 /dev/nbd1' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.402 256+0 records in 00:06:57.402 256+0 records out 00:06:57.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102299 s, 103 MB/s 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.402 256+0 records in 00:06:57.402 256+0 records out 00:06:57.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205979 s, 50.9 MB/s 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.402 256+0 records in 00:06:57.402 256+0 records out 00:06:57.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223392 s, 46.9 MB/s 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.402 09:25:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.661 09:25:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.919 09:25:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.919 09:25:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.919 09:25:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.919 09:25:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.919 09:25:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.920 09:25:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.920 09:25:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.920 09:25:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.920 09:25:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.920 09:25:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.920 09:25:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.179 09:25:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.179 09:25:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.179 09:25:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.438 [2024-10-07 09:25:53.927930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.697 [2024-10-07 09:25:54.009104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.697 [2024-10-07 09:25:54.009106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.697 [2024-10-07 09:25:54.057700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.697 [2024-10-07 09:25:54.057740] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.235 09:25:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.235 09:25:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:01.235 spdk_app_start Round 2 00:07:01.235 09:25:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 480184 /var/tmp/spdk-nbd.sock 00:07:01.235 09:25:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 480184 ']' 00:07:01.235 09:25:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.235 09:25:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.235 09:25:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
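The dd/cmp pairs traced in each round are the data-integrity check itself: a 1 MiB scratch file is filled from /dev/urandom, written onto every exported nbd device with O_DIRECT, and then the first 1M of each device is compared byte-for-byte against the scratch file. A condensed sketch of nbd_dd_data_verify from bdev/nbd_common.sh; the temp path is shortened here (the trace keeps it under spdk/test/event):

    # Sketch of nbd_dd_data_verify per the traced write/verify passes.
    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M $tmp_file $i   # cmp exits non-zero on any mismatch
            done
            rm $tmp_file
        fi
    }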
00:07:01.235 09:25:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.235 09:25:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.494 09:25:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.494 09:25:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.494 09:25:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.754 Malloc0 00:07:01.754 09:25:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.014 Malloc1 00:07:02.014 09:25:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.014 /dev/nbd0 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.014 09:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.014 09:25:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.274 1+0 records in 00:07:02.274 1+0 records out 00:07:02.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234178 s, 17.5 MB/s 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:02.274 09:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.274 09:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.274 09:25:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.274 /dev/nbd1 00:07:02.274 09:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.274 09:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.274 1+0 records in 00:07:02.274 1+0 records out 00:07:02.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241213 s, 17.0 MB/s 00:07:02.274 09:25:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:02.534 09:25:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:02.534 09:25:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:02.534 09:25:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.534 09:25:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:02.534 09:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.534 09:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.534 09:25:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.534 09:25:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.534 09:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.534 { 00:07:02.534 "nbd_device": "/dev/nbd0", 00:07:02.534 "bdev_name": "Malloc0" 00:07:02.534 }, 00:07:02.534 { 00:07:02.534 "nbd_device": "/dev/nbd1", 00:07:02.534 "bdev_name": "Malloc1" 00:07:02.534 } 00:07:02.534 ]' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.534 { 00:07:02.534 "nbd_device": "/dev/nbd0", 00:07:02.534 "bdev_name": "Malloc0" 00:07:02.534 }, 00:07:02.534 { 00:07:02.534 "nbd_device": "/dev/nbd1", 00:07:02.534 "bdev_name": "Malloc1" 00:07:02.534 } 00:07:02.534 ]' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.534 /dev/nbd1' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.534 /dev/nbd1' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.534 09:25:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.794 256+0 records in 00:07:02.794 256+0 records out 00:07:02.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114974 s, 91.2 MB/s 00:07:02.794 09:25:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.794 09:25:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.794 256+0 records in 00:07:02.794 256+0 records out 00:07:02.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207782 s, 50.5 MB/s 00:07:02.794 09:25:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.794 09:25:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.795 256+0 records in 00:07:02.795 256+0 records out 00:07:02.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218976 s, 47.9 MB/s 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.795 09:25:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.053 09:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.311 09:25:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.311 09:25:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.570 09:25:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.829 [2024-10-07 09:25:59.280257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.829 [2024-10-07 09:25:59.368892] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.829 [2024-10-07 09:25:59.368894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.089 [2024-10-07 09:25:59.416607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.089 [2024-10-07 09:25:59.416660] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.628 09:26:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 480184 /var/tmp/spdk-nbd.sock 00:07:06.628 09:26:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 480184 ']' 00:07:06.628 09:26:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.628 09:26:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.628 09:26:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:06.628 09:26:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.628 09:26:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:06.888 09:26:02 event.app_repeat -- event/event.sh@39 -- # killprocess 480184 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 480184 ']' 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 480184 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480184 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480184' 00:07:06.888 killing process with pid 480184 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@969 -- # kill 480184 00:07:06.888 09:26:02 event.app_repeat -- common/autotest_common.sh@974 -- # wait 480184 00:07:07.148 spdk_app_start is called in Round 0. 00:07:07.148 Shutdown signal received, stop current app iteration 00:07:07.148 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:07:07.148 spdk_app_start is called in Round 1. 00:07:07.148 Shutdown signal received, stop current app iteration 00:07:07.148 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:07:07.148 spdk_app_start is called in Round 2. 00:07:07.148 Shutdown signal received, stop current app iteration 00:07:07.148 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:07:07.148 spdk_app_start is called in Round 3. 
00:07:07.148 Shutdown signal received, stop current app iteration 00:07:07.148 09:26:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:07.148 09:26:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:07.148 00:07:07.148 real 0m17.187s 00:07:07.148 user 0m36.664s 00:07:07.148 sys 0m3.389s 00:07:07.148 09:26:02 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.148 09:26:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.148 ************************************ 00:07:07.148 END TEST app_repeat 00:07:07.148 ************************************ 00:07:07.148 09:26:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:07.148 09:26:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.148 09:26:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.148 09:26:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.148 09:26:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.148 ************************************ 00:07:07.148 START TEST cpu_locks 00:07:07.148 ************************************ 00:07:07.148 09:26:02 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:07.148 * Looking for test storage... 00:07:07.148 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:07.148 09:26:02 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.148 09:26:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.148 09:26:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.408 09:26:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.408 --rc genhtml_branch_coverage=1 00:07:07.408 --rc genhtml_function_coverage=1 00:07:07.408 --rc genhtml_legend=1 00:07:07.408 --rc geninfo_all_blocks=1 00:07:07.408 --rc geninfo_unexecuted_blocks=1 00:07:07.408 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.408 ' 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.408 --rc genhtml_branch_coverage=1 00:07:07.408 --rc genhtml_function_coverage=1 00:07:07.408 --rc genhtml_legend=1 00:07:07.408 --rc geninfo_all_blocks=1 00:07:07.408 --rc geninfo_unexecuted_blocks=1 00:07:07.408 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.408 ' 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.408 --rc genhtml_branch_coverage=1 00:07:07.408 --rc genhtml_function_coverage=1 00:07:07.408 --rc genhtml_legend=1 00:07:07.408 --rc geninfo_all_blocks=1 00:07:07.408 --rc geninfo_unexecuted_blocks=1 00:07:07.408 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.408 ' 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.408 --rc genhtml_branch_coverage=1 00:07:07.408 --rc genhtml_function_coverage=1 00:07:07.408 --rc genhtml_legend=1 00:07:07.408 --rc geninfo_all_blocks=1 00:07:07.408 --rc geninfo_unexecuted_blocks=1 00:07:07.408 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.408 ' 00:07:07.408 09:26:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:07.408 09:26:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:07.408 09:26:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:07.408 09:26:02 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.408 09:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.408 ************************************ 00:07:07.408 START TEST default_locks 00:07:07.408 ************************************ 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=482695 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 482695 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 482695 ']' 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.408 09:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.408 [2024-10-07 09:26:02.811566] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:07.409 [2024-10-07 09:26:02.811620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482695 ] 00:07:07.409 [2024-10-07 09:26:02.884480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.668 [2024-10-07 09:26:02.974889] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.668 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.668 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:07.668 09:26:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 482695 00:07:07.668 09:26:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 482695 00:07:07.668 09:26:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.238 lslocks: write error 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 482695 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 482695 ']' 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 482695 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482695 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482695' 00:07:08.238 killing process with pid 482695 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 482695 00:07:08.238 09:26:03 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 482695 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 482695 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 482695 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 482695 00:07:08.497 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 482695 ']' 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 
00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.498 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (482695) - No such process 00:07:08.498 ERROR: process (pid: 482695) is no longer running 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.498 00:07:08.498 real 0m1.239s 00:07:08.498 user 0m1.174s 00:07:08.498 sys 0m0.578s 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.498 09:26:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.498 ************************************ 00:07:08.498 END TEST default_locks 00:07:08.498 ************************************ 00:07:08.758 09:26:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:08.758 09:26:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.758 09:26:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.758 09:26:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.758 ************************************ 00:07:08.758 START TEST default_locks_via_rpc 00:07:08.758 ************************************ 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=482904 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 482904 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 482904 ']' 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.758 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.758 [2024-10-07 09:26:04.129774] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:08.758 [2024-10-07 09:26:04.129913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482904 ] 00:07:08.758 [2024-10-07 09:26:04.204708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.758 [2024-10-07 09:26:04.293038] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 482904 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 482904 00:07:09.698 09:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 482904 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 482904 ']' 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 482904 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482904 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482904' 00:07:09.698 killing process with pid 482904 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 482904 00:07:09.698 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 482904 00:07:10.334 00:07:10.334 real 0m1.486s 00:07:10.334 user 0m1.557s 00:07:10.334 sys 0m0.526s 00:07:10.334 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.334 09:26:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 ************************************ 00:07:10.334 END TEST default_locks_via_rpc 00:07:10.334 ************************************ 00:07:10.334 09:26:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:10.334 09:26:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.334 09:26:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.334 09:26:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 ************************************ 00:07:10.334 START TEST non_locking_app_on_locked_coremask 00:07:10.334 ************************************ 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=483115 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 483115 /var/tmp/spdk.sock 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 483115 ']' 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.334 09:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 [2024-10-07 09:26:05.704657] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:10.334 [2024-10-07 09:26:05.704725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483115 ] 00:07:10.334 [2024-10-07 09:26:05.779841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.334 [2024-10-07 09:26:05.866912] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=483223 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 483223 /var/tmp/spdk2.sock 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 483223 ']' 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.274 09:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.274 [2024-10-07 09:26:06.601059] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:11.274 [2024-10-07 09:26:06.601123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483223 ] 00:07:11.274 [2024-10-07 09:26:06.697863] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.274 [2024-10-07 09:26:06.697889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.534 [2024-10-07 09:26:06.858697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.103 09:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.103 09:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.103 09:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 483115 00:07:12.103 09:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 483115 00:07:12.103 09:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.673 lslocks: write error 00:07:12.673 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 483115 00:07:12.673 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 483115 ']' 00:07:12.673 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 483115 00:07:12.673 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.673 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.673 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483115 00:07:12.936 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.936 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.937 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483115' 00:07:12.937 killing process with pid 483115 00:07:12.937 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 483115 00:07:12.937 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 483115 00:07:13.509 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 483223 00:07:13.509 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 483223 ']' 00:07:13.509 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 483223 00:07:13.509 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.509 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.509 09:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483223 00:07:13.509 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.509 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.509 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483223' 00:07:13.509 killing 
process with pid 483223 00:07:13.509 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 483223 00:07:13.509 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 483223 00:07:14.078 00:07:14.078 real 0m3.694s 00:07:14.078 user 0m3.957s 00:07:14.078 sys 0m1.189s 00:07:14.078 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.078 09:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.078 ************************************ 00:07:14.078 END TEST non_locking_app_on_locked_coremask 00:07:14.078 ************************************ 00:07:14.078 09:26:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:14.078 09:26:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.078 09:26:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.078 09:26:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.078 ************************************ 00:07:14.078 START TEST locking_app_on_unlocked_coremask 00:07:14.078 ************************************ 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=483677 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 483677 /var/tmp/spdk.sock 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 483677 ']' 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.078 09:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:14.078 [2024-10-07 09:26:09.467108] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:14.078 [2024-10-07 09:26:09.467166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483677 ] 00:07:14.078 [2024-10-07 09:26:09.540716] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.078 [2024-10-07 09:26:09.540748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.078 [2024-10-07 09:26:09.632525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.016 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=483695 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 483695 /var/tmp/spdk2.sock 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 483695 ']' 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.017 09:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.017 [2024-10-07 09:26:10.354045] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:15.017 [2024-10-07 09:26:10.354112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483695 ] 00:07:15.017 [2024-10-07 09:26:10.454079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.276 [2024-10-07 09:26:10.621750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.846 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.846 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.846 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 483695 00:07:15.846 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.846 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 483695 00:07:16.417 lslocks: write error 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 483677 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 483677 ']' 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 483677 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483677 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483677' 00:07:16.417 killing process with pid 483677 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 483677 00:07:16.417 09:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 483677 00:07:16.988 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 483695 00:07:16.988 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 483695 ']' 00:07:16.988 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 483695 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483695 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.248 09:26:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483695' 00:07:17.248 killing process with pid 483695 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 483695 00:07:17.248 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 483695 00:07:17.508 00:07:17.508 real 0m3.505s 00:07:17.508 user 0m3.747s 00:07:17.508 sys 0m1.119s 00:07:17.508 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.508 09:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.508 ************************************ 00:07:17.508 END TEST locking_app_on_unlocked_coremask 00:07:17.508 ************************************ 00:07:17.508 09:26:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:17.508 09:26:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.508 09:26:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.508 09:26:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.508 ************************************ 00:07:17.508 START TEST locking_app_on_locked_coremask 00:07:17.508 ************************************ 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=484079 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 484079 /var/tmp/spdk.sock 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 484079 ']' 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.508 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.508 [2024-10-07 09:26:13.062924] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:17.508 [2024-10-07 09:26:13.063004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484079 ] 00:07:17.768 [2024-10-07 09:26:13.141355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.768 [2024-10-07 09:26:13.225511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=484254 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 484254 /var/tmp/spdk2.sock 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 484254 /var/tmp/spdk2.sock 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 484254 /var/tmp/spdk2.sock 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 484254 ']' 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.705 09:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.705 [2024-10-07 09:26:13.981878] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:18.705 [2024-10-07 09:26:13.981964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484254 ] 00:07:18.705 [2024-10-07 09:26:14.080163] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 484079 has claimed it. 00:07:18.705 [2024-10-07 09:26:14.080211] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.274 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (484254) - No such process 00:07:19.274 ERROR: process (pid: 484254) is no longer running 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 484079 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 484079 00:07:19.274 09:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.534 lslocks: write error 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 484079 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 484079 ']' 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 484079 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484079 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484079' 00:07:19.534 killing process with pid 484079 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 484079 00:07:19.534 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 484079 00:07:20.103 00:07:20.103 real 0m2.368s 00:07:20.103 user 0m2.618s 00:07:20.103 sys 0m0.733s 00:07:20.103 09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.103 
09:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.103 ************************************ 00:07:20.103 END TEST locking_app_on_locked_coremask 00:07:20.103 ************************************ 00:07:20.103 09:26:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:20.103 09:26:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.103 09:26:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.103 09:26:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.103 ************************************ 00:07:20.103 START TEST locking_overlapped_coremask 00:07:20.103 ************************************ 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=484465 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 484465 /var/tmp/spdk.sock 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 484465 ']' 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.103 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.104 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.104 09:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.104 [2024-10-07 09:26:15.519384] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:20.104 [2024-10-07 09:26:15.519452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484465 ] 00:07:20.104 [2024-10-07 09:26:15.593450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.363 [2024-10-07 09:26:15.681998] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.363 [2024-10-07 09:26:15.682074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.363 [2024-10-07 09:26:15.682076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=484642 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 484642 /var/tmp/spdk2.sock 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 484642 /var/tmp/spdk2.sock 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 484642 /var/tmp/spdk2.sock 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 484642 ']' 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.933 09:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.933 [2024-10-07 09:26:16.406887] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:20.933 [2024-10-07 09:26:16.406958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484642 ] 00:07:21.192 [2024-10-07 09:26:16.506514] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 484465 has claimed it. 00:07:21.192 [2024-10-07 09:26:16.506553] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.762 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (484642) - No such process 00:07:21.762 ERROR: process (pid: 484642) is no longer running 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 484465 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 484465 ']' 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 484465 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484465 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484465' 00:07:21.762 killing process with pid 484465 00:07:21.762 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 484465 00:07:21.762 09:26:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 484465 00:07:22.022 00:07:22.022 real 0m1.980s 00:07:22.022 user 0m5.548s 00:07:22.022 sys 0m0.506s 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.022 ************************************ 00:07:22.022 END TEST locking_overlapped_coremask 00:07:22.022 ************************************ 00:07:22.022 09:26:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:22.022 09:26:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.022 09:26:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.022 09:26:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.022 ************************************ 00:07:22.022 START TEST locking_overlapped_coremask_via_rpc 00:07:22.022 ************************************ 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=484846 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 484846 /var/tmp/spdk.sock 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 484846 ']' 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.022 09:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.022 [2024-10-07 09:26:17.583343] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:22.022 [2024-10-07 09:26:17.583423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484846 ] 00:07:22.281 [2024-10-07 09:26:17.659102] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.281 [2024-10-07 09:26:17.659130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.281 [2024-10-07 09:26:17.748073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.281 [2024-10-07 09:26:17.748158] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.281 [2024-10-07 09:26:17.748159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=484908 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 484908 /var/tmp/spdk2.sock 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 484908 ']' 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.218 09:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.218 [2024-10-07 09:26:18.478365] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:23.218 [2024-10-07 09:26:18.478443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484908 ] 00:07:23.218 [2024-10-07 09:26:18.582516] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.218 [2024-10-07 09:26:18.582549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.218 [2024-10-07 09:26:18.755662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.218 [2024-10-07 09:26:18.755769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.218 [2024-10-07 09:26:18.755770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.169 [2024-10-07 09:26:19.392888] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 484846 has claimed it. 
00:07:24.169 request: 00:07:24.169 { 00:07:24.169 "method": "framework_enable_cpumask_locks", 00:07:24.169 "req_id": 1 00:07:24.169 } 00:07:24.169 Got JSON-RPC error response 00:07:24.169 response: 00:07:24.169 { 00:07:24.169 "code": -32603, 00:07:24.169 "message": "Failed to claim CPU core: 2" 00:07:24.169 } 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 484846 /var/tmp/spdk.sock 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 484846 ']' 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 484908 /var/tmp/spdk2.sock 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 484908 ']' 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.169 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.429 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.429 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:24.429 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:24.430 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.430 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.430 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.430 00:07:24.430 real 0m2.252s 00:07:24.430 user 0m0.979s 00:07:24.430 sys 0m0.208s 00:07:24.430 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.430 09:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.430 ************************************ 00:07:24.430 END TEST locking_overlapped_coremask_via_rpc 00:07:24.430 ************************************ 00:07:24.430 09:26:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:24.430 09:26:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 484846 ]] 00:07:24.430 09:26:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 484846 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 484846 ']' 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 484846 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484846 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484846' 00:07:24.430 killing process with pid 484846 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 484846 00:07:24.430 09:26:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 484846 00:07:24.998 09:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 484908 ]] 00:07:24.998 09:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 484908 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 484908 ']' 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 484908 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484908 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484908' 00:07:24.998 killing process with pid 484908 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 484908 00:07:24.998 09:26:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 484908 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 484846 ]] 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 484846 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 484846 ']' 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 484846 00:07:25.258 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (484846) - No such process 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 484846 is not found' 00:07:25.258 Process with pid 484846 is not found 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 484908 ]] 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 484908 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 484908 ']' 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 484908 00:07:25.258 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (484908) - No such process 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 484908 is not found' 00:07:25.258 Process with pid 484908 is not found 00:07:25.258 09:26:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.258 00:07:25.258 real 0m18.133s 00:07:25.258 user 0m31.193s 00:07:25.258 sys 0m5.993s 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.258 09:26:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.258 ************************************ 00:07:25.258 END TEST cpu_locks 00:07:25.258 ************************************ 00:07:25.258 00:07:25.258 real 0m44.278s 00:07:25.258 user 1m22.372s 00:07:25.258 sys 0m10.576s 00:07:25.258 09:26:20 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.258 09:26:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.258 ************************************ 00:07:25.258 END TEST event 00:07:25.258 ************************************ 00:07:25.258 09:26:20 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:25.258 09:26:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.258 09:26:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.258 09:26:20 -- common/autotest_common.sh@10 -- # set +x 00:07:25.258 ************************************ 00:07:25.258 START TEST thread 00:07:25.258 ************************************ 00:07:25.258 09:26:20 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:25.517 * Looking for test storage... 00:07:25.517 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.517 09:26:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.517 09:26:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.517 09:26:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.517 09:26:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.517 09:26:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.517 09:26:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.517 09:26:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.517 09:26:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.517 09:26:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.517 09:26:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.517 09:26:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.517 09:26:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:25.517 09:26:20 thread -- scripts/common.sh@345 -- # : 1 00:07:25.517 09:26:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.517 09:26:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.517 09:26:20 thread -- scripts/common.sh@365 -- # decimal 1 00:07:25.517 09:26:20 thread -- scripts/common.sh@353 -- # local d=1 00:07:25.517 09:26:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.517 09:26:20 thread -- scripts/common.sh@355 -- # echo 1 00:07:25.517 09:26:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.517 09:26:20 thread -- scripts/common.sh@366 -- # decimal 2 00:07:25.517 09:26:20 thread -- scripts/common.sh@353 -- # local d=2 00:07:25.517 09:26:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.517 09:26:20 thread -- scripts/common.sh@355 -- # echo 2 00:07:25.517 09:26:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.517 09:26:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.517 09:26:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.517 09:26:20 thread -- scripts/common.sh@368 -- # return 0 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.517 --rc genhtml_branch_coverage=1 00:07:25.517 --rc genhtml_function_coverage=1 00:07:25.517 --rc genhtml_legend=1 00:07:25.517 --rc geninfo_all_blocks=1 00:07:25.517 --rc geninfo_unexecuted_blocks=1 00:07:25.517 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.517 ' 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.517 --rc genhtml_branch_coverage=1 00:07:25.517 --rc genhtml_function_coverage=1 00:07:25.517 --rc genhtml_legend=1 00:07:25.517 --rc geninfo_all_blocks=1 
00:07:25.517 --rc geninfo_unexecuted_blocks=1 00:07:25.517 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.517 ' 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.517 --rc genhtml_branch_coverage=1 00:07:25.517 --rc genhtml_function_coverage=1 00:07:25.517 --rc genhtml_legend=1 00:07:25.517 --rc geninfo_all_blocks=1 00:07:25.517 --rc geninfo_unexecuted_blocks=1 00:07:25.517 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.517 ' 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.517 --rc genhtml_branch_coverage=1 00:07:25.517 --rc genhtml_function_coverage=1 00:07:25.517 --rc genhtml_legend=1 00:07:25.517 --rc geninfo_all_blocks=1 00:07:25.517 --rc geninfo_unexecuted_blocks=1 00:07:25.517 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.517 ' 00:07:25.517 09:26:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.517 09:26:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.517 ************************************ 00:07:25.517 START TEST thread_poller_perf 00:07:25.517 ************************************ 00:07:25.517 09:26:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.517 [2024-10-07 09:26:21.037175] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:25.517 [2024-10-07 09:26:21.037261] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485317 ] 00:07:25.775 [2024-10-07 09:26:21.117475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.775 [2024-10-07 09:26:21.203149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.775 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:27.152 ====================================== 00:07:27.152 busy:2304635412 (cyc) 00:07:27.152 total_run_count: 809000 00:07:27.152 tsc_hz: 2300000000 (cyc) 00:07:27.152 ====================================== 00:07:27.152 poller_cost: 2848 (cyc), 1238 (nsec) 00:07:27.152 00:07:27.152 real 0m1.265s 00:07:27.152 user 0m1.161s 00:07:27.152 sys 0m0.099s 00:07:27.152 09:26:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.152 09:26:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.152 ************************************ 00:07:27.152 END TEST thread_poller_perf 00:07:27.152 ************************************ 00:07:27.152 09:26:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:27.152 09:26:22 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:27.152 09:26:22 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.152 09:26:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.152 ************************************ 00:07:27.152 START TEST thread_poller_perf 00:07:27.152 ************************************ 00:07:27.152 09:26:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:27.152 [2024-10-07 09:26:22.390331] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:27.152 [2024-10-07 09:26:22.390421] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485521 ] 00:07:27.152 [2024-10-07 09:26:22.470305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.152 [2024-10-07 09:26:22.559756] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.152 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:28.090 ====================================== 00:07:28.090 busy:2301365026 (cyc) 00:07:28.090 total_run_count: 13134000 00:07:28.090 tsc_hz: 2300000000 (cyc) 00:07:28.090 ====================================== 00:07:28.090 poller_cost: 175 (cyc), 76 (nsec) 00:07:28.090 00:07:28.090 real 0m1.265s 00:07:28.090 user 0m1.159s 00:07:28.090 sys 0m0.101s 00:07:28.090 09:26:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.090 09:26:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.090 ************************************ 00:07:28.090 END TEST thread_poller_perf 00:07:28.090 ************************************ 00:07:28.349 09:26:23 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:28.349 09:26:23 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:28.349 09:26:23 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.349 09:26:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.349 09:26:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.349 ************************************ 00:07:28.349 START TEST thread_spdk_lock 00:07:28.349 ************************************ 00:07:28.349 09:26:23 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:28.349 [2024-10-07 09:26:23.741062] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:28.349 [2024-10-07 09:26:23.741149] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485744 ] 00:07:28.349 [2024-10-07 09:26:23.819463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.349 [2024-10-07 09:26:23.901100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.349 [2024-10-07 09:26:23.901102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.919 [2024-10-07 09:26:24.397499] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:28.919 [2024-10-07 09:26:24.397539] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3099:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:28.919 [2024-10-07 09:26:24.397566] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3054:sspin_stacks_print: *ERROR*: spinlock 0x14c6500 00:07:28.919 [2024-10-07 09:26:24.398426] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:28.919 [2024-10-07 09:26:24.398533] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:28.919 [2024-10-07 09:26:24.398552] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held 
while SPDK thread going off CPU (thread->lock_count == 0) 00:07:28.919 Starting test contend 00:07:28.919 Worker Delay Wait us Hold us Total us 00:07:28.919 0 3 169638 190728 360366 00:07:28.919 1 5 88534 288214 376749 00:07:28.919 PASS test contend 00:07:28.919 Starting test hold_by_poller 00:07:28.919 PASS test hold_by_poller 00:07:28.919 Starting test hold_by_message 00:07:28.919 PASS test hold_by_message 00:07:28.919 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:28.919 100014 assertions passed 00:07:28.919 0 assertions failed 00:07:28.919 00:07:28.919 real 0m0.749s 00:07:28.919 user 0m1.150s 00:07:28.919 sys 0m0.092s 00:07:28.919 09:26:24 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.919 09:26:24 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:28.919 ************************************ 00:07:28.919 END TEST thread_spdk_lock 00:07:28.919 ************************************ 00:07:29.178 00:07:29.178 real 0m3.704s 00:07:29.178 user 0m3.663s 00:07:29.178 sys 0m0.560s 00:07:29.178 09:26:24 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.178 09:26:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.178 ************************************ 00:07:29.178 END TEST thread 00:07:29.178 ************************************ 00:07:29.178 09:26:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:29.178 09:26:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.178 09:26:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.178 09:26:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.178 09:26:24 -- common/autotest_common.sh@10 -- # set +x 00:07:29.178 ************************************ 00:07:29.178 START TEST app_cmdline 00:07:29.178 ************************************ 00:07:29.178 09:26:24 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.178 * Looking for test storage... 
00:07:29.178 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:29.178 09:26:24 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.178 09:26:24 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.178 09:26:24 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.439 09:26:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.439 --rc genhtml_branch_coverage=1 00:07:29.439 --rc genhtml_function_coverage=1 00:07:29.439 --rc genhtml_legend=1 00:07:29.439 --rc geninfo_all_blocks=1 00:07:29.439 --rc geninfo_unexecuted_blocks=1 00:07:29.439 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.439 ' 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.439 --rc genhtml_branch_coverage=1 00:07:29.439 --rc genhtml_function_coverage=1 00:07:29.439 --rc 
genhtml_legend=1 00:07:29.439 --rc geninfo_all_blocks=1 00:07:29.439 --rc geninfo_unexecuted_blocks=1 00:07:29.439 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.439 ' 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.439 --rc genhtml_branch_coverage=1 00:07:29.439 --rc genhtml_function_coverage=1 00:07:29.439 --rc genhtml_legend=1 00:07:29.439 --rc geninfo_all_blocks=1 00:07:29.439 --rc geninfo_unexecuted_blocks=1 00:07:29.439 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.439 ' 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.439 --rc genhtml_branch_coverage=1 00:07:29.439 --rc genhtml_function_coverage=1 00:07:29.439 --rc genhtml_legend=1 00:07:29.439 --rc geninfo_all_blocks=1 00:07:29.439 --rc geninfo_unexecuted_blocks=1 00:07:29.439 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:29.439 ' 00:07:29.439 09:26:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:29.439 09:26:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=485950 00:07:29.439 09:26:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:29.439 09:26:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 485950 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 485950 ']' 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.439 09:26:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.439 [2024-10-07 09:26:24.824224] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:29.439 [2024-10-07 09:26:24.824315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485950 ] 00:07:29.439 [2024-10-07 09:26:24.895122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.439 [2024-10-07 09:26:24.984335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:30.379 { 00:07:30.379 "version": "SPDK v25.01-pre git sha1 3950cd1bb", 00:07:30.379 "fields": { 00:07:30.379 "major": 25, 00:07:30.379 "minor": 1, 00:07:30.379 "patch": 0, 00:07:30.379 "suffix": "-pre", 00:07:30.379 "commit": "3950cd1bb" 00:07:30.379 } 00:07:30.379 } 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.379 09:26:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:30.379 09:26:25 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:30.379 09:26:25 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.640 request: 00:07:30.640 { 00:07:30.640 "method": "env_dpdk_get_mem_stats", 00:07:30.640 "req_id": 1 00:07:30.640 } 00:07:30.640 Got JSON-RPC error response 00:07:30.640 response: 00:07:30.640 { 00:07:30.640 "code": -32601, 00:07:30.640 "message": "Method not found" 00:07:30.640 } 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.640 09:26:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 485950 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 485950 ']' 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 485950 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 485950 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 485950' 00:07:30.640 killing process with pid 485950 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@969 -- # kill 485950 00:07:30.640 09:26:26 app_cmdline -- common/autotest_common.sh@974 -- # wait 485950 00:07:31.217 00:07:31.217 real 0m1.929s 00:07:31.217 user 0m2.217s 00:07:31.217 sys 0m0.558s 00:07:31.217 09:26:26 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.217 09:26:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.217 ************************************ 00:07:31.217 END TEST app_cmdline 00:07:31.217 ************************************ 00:07:31.217 09:26:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:31.217 09:26:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.217 09:26:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.217 09:26:26 -- common/autotest_common.sh@10 -- # set +x 00:07:31.217 ************************************ 00:07:31.217 START TEST version 00:07:31.217 ************************************ 00:07:31.217 09:26:26 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:31.217 * Looking for test storage... 
00:07:31.217 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:31.217 09:26:26 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.217 09:26:26 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.217 09:26:26 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.477 09:26:26 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.477 09:26:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.477 09:26:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.477 09:26:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.477 09:26:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.477 09:26:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.477 09:26:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.477 09:26:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.477 09:26:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.477 09:26:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.477 09:26:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.477 09:26:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.477 09:26:26 version -- scripts/common.sh@344 -- # case "$op" in 00:07:31.477 09:26:26 version -- scripts/common.sh@345 -- # : 1 00:07:31.477 09:26:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.477 09:26:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.477 09:26:26 version -- scripts/common.sh@365 -- # decimal 1 00:07:31.477 09:26:26 version -- scripts/common.sh@353 -- # local d=1 00:07:31.477 09:26:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.477 09:26:26 version -- scripts/common.sh@355 -- # echo 1 00:07:31.477 09:26:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.478 09:26:26 version -- scripts/common.sh@366 -- # decimal 2 00:07:31.478 09:26:26 version -- scripts/common.sh@353 -- # local d=2 00:07:31.478 09:26:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.478 09:26:26 version -- scripts/common.sh@355 -- # echo 2 00:07:31.478 09:26:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.478 09:26:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.478 09:26:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.478 09:26:26 version -- scripts/common.sh@368 -- # return 0 00:07:31.478 09:26:26 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.478 09:26:26 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.478 --rc genhtml_branch_coverage=1 00:07:31.478 --rc genhtml_function_coverage=1 00:07:31.478 --rc genhtml_legend=1 00:07:31.478 --rc geninfo_all_blocks=1 00:07:31.478 --rc geninfo_unexecuted_blocks=1 00:07:31.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.478 ' 00:07:31.478 09:26:26 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.478 --rc genhtml_branch_coverage=1 00:07:31.478 --rc genhtml_function_coverage=1 00:07:31.478 --rc genhtml_legend=1 00:07:31.478 --rc geninfo_all_blocks=1 00:07:31.478 --rc geninfo_unexecuted_blocks=1 00:07:31.478 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.478 ' 00:07:31.478 09:26:26 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.478 --rc genhtml_branch_coverage=1 00:07:31.478 --rc genhtml_function_coverage=1 00:07:31.478 --rc genhtml_legend=1 00:07:31.478 --rc geninfo_all_blocks=1 00:07:31.478 --rc geninfo_unexecuted_blocks=1 00:07:31.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.478 ' 00:07:31.478 09:26:26 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.478 --rc genhtml_branch_coverage=1 00:07:31.478 --rc genhtml_function_coverage=1 00:07:31.478 --rc genhtml_legend=1 00:07:31.478 --rc geninfo_all_blocks=1 00:07:31.478 --rc geninfo_unexecuted_blocks=1 00:07:31.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.478 ' 00:07:31.478 09:26:26 version -- app/version.sh@17 -- # get_header_version major 00:07:31.478 09:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # cut -f2 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.478 09:26:26 version -- app/version.sh@17 -- # major=25 00:07:31.478 09:26:26 version -- app/version.sh@18 -- # get_header_version minor 00:07:31.478 09:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # cut -f2 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.478 09:26:26 version -- app/version.sh@18 -- # minor=1 00:07:31.478 09:26:26 version -- app/version.sh@19 -- # get_header_version patch 00:07:31.478 09:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # cut -f2 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.478 09:26:26 version -- app/version.sh@19 -- # patch=0 00:07:31.478 09:26:26 version -- app/version.sh@20 -- # get_header_version suffix 00:07:31.478 09:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # cut -f2 00:07:31.478 09:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.478 09:26:26 version -- app/version.sh@20 -- # suffix=-pre 00:07:31.478 09:26:26 version -- app/version.sh@22 -- # version=25.1 00:07:31.478 09:26:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:31.478 09:26:26 version -- app/version.sh@28 -- # version=25.1rc0 00:07:31.478 09:26:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:31.478 09:26:26 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:07:31.478 09:26:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:31.478 09:26:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:31.478 00:07:31.478 real 0m0.261s 00:07:31.478 user 0m0.153s 00:07:31.478 sys 0m0.158s 00:07:31.478 09:26:26 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.478 09:26:26 version -- common/autotest_common.sh@10 -- # set +x 00:07:31.478 ************************************ 00:07:31.478 END TEST version 00:07:31.478 ************************************ 00:07:31.478 09:26:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@194 -- # uname -s 00:07:31.478 09:26:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:31.478 09:26:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.478 09:26:26 -- common/autotest_common.sh@10 -- # set +x 00:07:31.478 09:26:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:07:31.478 09:26:26 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:07:31.478 09:26:26 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:31.478 09:26:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.478 09:26:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.478 09:26:26 -- common/autotest_common.sh@10 -- # set +x 00:07:31.478 ************************************ 00:07:31.478 START TEST llvm_fuzz 00:07:31.478 ************************************ 00:07:31.478 09:26:26 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:31.738 * Looking for test storage... 
00:07:31.738 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.738 09:26:27 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.738 --rc genhtml_branch_coverage=1 00:07:31.738 --rc genhtml_function_coverage=1 00:07:31.738 --rc genhtml_legend=1 00:07:31.738 --rc geninfo_all_blocks=1 00:07:31.738 --rc geninfo_unexecuted_blocks=1 00:07:31.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.738 ' 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.738 --rc genhtml_branch_coverage=1 00:07:31.738 --rc genhtml_function_coverage=1 00:07:31.738 --rc genhtml_legend=1 00:07:31.738 --rc geninfo_all_blocks=1 00:07:31.738 --rc 
geninfo_unexecuted_blocks=1 00:07:31.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.738 ' 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.738 --rc genhtml_branch_coverage=1 00:07:31.738 --rc genhtml_function_coverage=1 00:07:31.738 --rc genhtml_legend=1 00:07:31.738 --rc geninfo_all_blocks=1 00:07:31.738 --rc geninfo_unexecuted_blocks=1 00:07:31.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.738 ' 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.738 --rc genhtml_branch_coverage=1 00:07:31.738 --rc genhtml_function_coverage=1 00:07:31.738 --rc genhtml_legend=1 00:07:31.738 --rc geninfo_all_blocks=1 00:07:31.738 --rc geninfo_unexecuted_blocks=1 00:07:31.738 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:31.738 ' 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:31.738 09:26:27 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.738 09:26:27 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:31.738 ************************************ 00:07:31.738 START TEST nvmf_llvm_fuzz 00:07:31.738 ************************************ 00:07:31.738 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:31.738 * Looking for test storage... 
00:07:31.738 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:31.738 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.738 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.738 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:32.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.002 --rc genhtml_branch_coverage=1 00:07:32.002 --rc genhtml_function_coverage=1 00:07:32.002 --rc genhtml_legend=1 00:07:32.002 --rc geninfo_all_blocks=1 00:07:32.002 --rc geninfo_unexecuted_blocks=1 00:07:32.002 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.002 ' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:32.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.002 --rc genhtml_branch_coverage=1 00:07:32.002 --rc genhtml_function_coverage=1 00:07:32.002 --rc genhtml_legend=1 00:07:32.002 --rc geninfo_all_blocks=1 00:07:32.002 --rc geninfo_unexecuted_blocks=1 00:07:32.002 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.002 ' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:32.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.002 --rc genhtml_branch_coverage=1 00:07:32.002 --rc genhtml_function_coverage=1 00:07:32.002 --rc genhtml_legend=1 00:07:32.002 --rc geninfo_all_blocks=1 00:07:32.002 --rc geninfo_unexecuted_blocks=1 00:07:32.002 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.002 ' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:32.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.002 --rc genhtml_branch_coverage=1 00:07:32.002 --rc genhtml_function_coverage=1 00:07:32.002 --rc genhtml_legend=1 00:07:32.002 --rc geninfo_all_blocks=1 00:07:32.002 --rc geninfo_unexecuted_blocks=1 00:07:32.002 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.002 ' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:32.002 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:32.003 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:32.003 #define SPDK_CONFIG_H 00:07:32.003 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:32.003 #define SPDK_CONFIG_APPS 1 00:07:32.003 #define SPDK_CONFIG_ARCH native 00:07:32.003 #undef SPDK_CONFIG_ASAN 00:07:32.003 #undef SPDK_CONFIG_AVAHI 00:07:32.003 #undef SPDK_CONFIG_CET 00:07:32.003 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:32.003 #define SPDK_CONFIG_COVERAGE 1 00:07:32.003 #define SPDK_CONFIG_CROSS_PREFIX 00:07:32.003 #undef SPDK_CONFIG_CRYPTO 00:07:32.003 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:32.003 #undef SPDK_CONFIG_CUSTOMOCF 00:07:32.003 #undef SPDK_CONFIG_DAOS 00:07:32.003 #define SPDK_CONFIG_DAOS_DIR 00:07:32.003 #define SPDK_CONFIG_DEBUG 1 00:07:32.003 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:32.003 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:32.003 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:32.003 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:32.003 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:32.003 #undef SPDK_CONFIG_DPDK_UADK 00:07:32.003 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:32.003 #define SPDK_CONFIG_EXAMPLES 1 00:07:32.003 #undef SPDK_CONFIG_FC 00:07:32.003 #define SPDK_CONFIG_FC_PATH 00:07:32.003 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:32.003 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:32.003 #define SPDK_CONFIG_FSDEV 1 00:07:32.003 #undef SPDK_CONFIG_FUSE 00:07:32.003 #define SPDK_CONFIG_FUZZER 1 00:07:32.003 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:32.003 #undef SPDK_CONFIG_GOLANG 00:07:32.003 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:32.003 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:32.003 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:32.003 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:32.003 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:32.003 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:32.003 #undef SPDK_CONFIG_HAVE_LZ4 00:07:32.003 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:32.003 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:32.003 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:32.003 #define SPDK_CONFIG_IDXD 1 00:07:32.003 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:32.003 #undef SPDK_CONFIG_IPSEC_MB 00:07:32.003 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:32.003 #define SPDK_CONFIG_ISAL 1 00:07:32.003 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:32.003 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:32.003 #define SPDK_CONFIG_LIBDIR 00:07:32.003 #undef SPDK_CONFIG_LTO 00:07:32.003 #define SPDK_CONFIG_MAX_LCORES 128 00:07:32.003 #define SPDK_CONFIG_NVME_CUSE 1 00:07:32.003 #undef SPDK_CONFIG_OCF 00:07:32.003 #define SPDK_CONFIG_OCF_PATH 00:07:32.003 #define SPDK_CONFIG_OPENSSL_PATH 00:07:32.003 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:32.003 #define SPDK_CONFIG_PGO_DIR 00:07:32.003 #undef SPDK_CONFIG_PGO_USE 00:07:32.003 #define SPDK_CONFIG_PREFIX /usr/local 00:07:32.003 #undef SPDK_CONFIG_RAID5F 00:07:32.003 #undef SPDK_CONFIG_RBD 00:07:32.003 #define SPDK_CONFIG_RDMA 1 00:07:32.003 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:32.003 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:32.003 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:32.003 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:32.003 #undef SPDK_CONFIG_SHARED 00:07:32.003 #undef SPDK_CONFIG_SMA 00:07:32.003 #define SPDK_CONFIG_TESTS 1 00:07:32.003 #undef SPDK_CONFIG_TSAN 00:07:32.003 #define SPDK_CONFIG_UBLK 1 00:07:32.003 #define SPDK_CONFIG_UBSAN 1 00:07:32.003 #undef SPDK_CONFIG_UNIT_TESTS 00:07:32.003 #undef SPDK_CONFIG_URING 00:07:32.003 #define SPDK_CONFIG_URING_PATH 00:07:32.003 #undef SPDK_CONFIG_URING_ZNS 00:07:32.003 #undef SPDK_CONFIG_USDT 00:07:32.003 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:32.003 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:32.003 #define SPDK_CONFIG_VFIO_USER 1 00:07:32.003 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:32.003 #define SPDK_CONFIG_VHOST 1 00:07:32.004 #define SPDK_CONFIG_VIRTIO 1 00:07:32.004 #undef SPDK_CONFIG_VTUNE 00:07:32.004 #define SPDK_CONFIG_VTUNE_DIR 00:07:32.004 #define SPDK_CONFIG_WERROR 1 00:07:32.004 #define SPDK_CONFIG_WPDK_DIR 00:07:32.004 #undef SPDK_CONFIG_XNVME 00:07:32.004 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:32.004 09:26:27 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:32.004 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:32.005 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 486469 ]] 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 486469 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.tl4Afb 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.tl4Afb/tests/nvmf /tmp/spdk.tl4Afb 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86494072832 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500294656 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8006221824 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 
09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47246716928 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894159872 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5902336 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249645568 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250149376 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=503808 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.006 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:07:32.007 * Looking for test storage... 
00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86494072832 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10220814336 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:32.007 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:32.007 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:32.266 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:32.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.267 --rc genhtml_branch_coverage=1 00:07:32.267 --rc genhtml_function_coverage=1 00:07:32.267 --rc genhtml_legend=1 00:07:32.267 --rc geninfo_all_blocks=1 00:07:32.267 --rc geninfo_unexecuted_blocks=1 00:07:32.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.267 ' 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:32.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.267 --rc genhtml_branch_coverage=1 00:07:32.267 --rc genhtml_function_coverage=1 00:07:32.267 --rc genhtml_legend=1 00:07:32.267 --rc geninfo_all_blocks=1 00:07:32.267 --rc geninfo_unexecuted_blocks=1 00:07:32.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.267 ' 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:32.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.267 --rc genhtml_branch_coverage=1 00:07:32.267 --rc genhtml_function_coverage=1 00:07:32.267 --rc genhtml_legend=1 00:07:32.267 --rc geninfo_all_blocks=1 00:07:32.267 --rc geninfo_unexecuted_blocks=1 00:07:32.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.267 ' 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:32.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.267 --rc genhtml_branch_coverage=1 00:07:32.267 --rc genhtml_function_coverage=1 00:07:32.267 --rc genhtml_legend=1 00:07:32.267 --rc geninfo_all_blocks=1 00:07:32.267 --rc geninfo_unexecuted_blocks=1 00:07:32.267 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:32.267 ' 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:32.267 09:26:27 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:32.267 09:26:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:32.267 [2024-10-07 09:26:27.694873] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:32.267 [2024-10-07 09:26:27.694959] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486542 ] 00:07:32.527 [2024-10-07 09:26:27.975676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.527 [2024-10-07 09:26:28.064250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.786 [2024-10-07 09:26:28.123379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.786 [2024-10-07 09:26:28.139578] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:32.786 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.786 INFO: Seed: 171679497 00:07:32.786 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:32.786 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:32.786 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:32.786 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.786 #2 INITED exec/s: 0 rss: 67Mb 00:07:32.786 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.786 This may also happen if the target rejected all inputs we tried so far 00:07:32.786 [2024-10-07 09:26:28.185117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.786 [2024-10-07 09:26:28.185147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.044 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:33.044 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:33.044 #27 NEW cov: 12169 ft: 12165 corp: 2/117b lim: 320 exec/s: 0 rss: 74Mb L: 116/116 MS: 5 InsertByte-InsertByte-CopyPart-CMP-InsertRepeatedBytes- DE: "\377$\364\376\302n%\202"- 00:07:33.044 [2024-10-07 09:26:28.525901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.044 [2024-10-07 09:26:28.525938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.044 #38 NEW cov: 12282 ft: 12774 corp: 3/233b lim: 320 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ChangeByte- 00:07:33.044 [2024-10-07 09:26:28.586046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.044 [2024-10-07 09:26:28.586074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.303 #39 NEW cov: 12288 ft: 12972 corp: 4/349b lim: 320 exec/s: 0 rss: 74Mb L: 116/116 MS: 1 ChangeBit- 00:07:33.303 [2024-10-07 09:26:28.646200] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:33.303 [2024-10-07 09:26:28.646229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.303 [2024-10-07 09:26:28.646291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:33.303 [2024-10-07 09:26:28.646305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.303 NEW_FUNC[1/1]: 0x14f4858 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:07:33.303 #43 NEW cov: 12405 ft: 13829 corp: 5/515b lim: 320 exec/s: 0 rss: 74Mb L: 166/166 MS: 4 ChangeBit-InsertRepeatedBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:33.303 [2024-10-07 09:26:28.696282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.303 [2024-10-07 09:26:28.696309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.303 #44 NEW cov: 12405 ft: 13908 corp: 6/631b lim: 320 exec/s: 0 rss: 74Mb L: 116/166 MS: 1 CopyPart- 00:07:33.303 [2024-10-07 09:26:28.756439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.303 [2024-10-07 09:26:28.756464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.303 #45 NEW cov: 12405 ft: 14078 corp: 7/747b lim: 320 exec/s: 0 rss: 74Mb L: 116/166 MS: 1 ChangeByte- 00:07:33.303 [2024-10-07 09:26:28.796644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.303 [2024-10-07 09:26:28.796670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.303 [2024-10-07 09:26:28.796735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004087 cdw11:00000000 00:07:33.303 [2024-10-07 09:26:28.796749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.303 #46 NEW cov: 12407 ft: 14178 corp: 8/914b lim: 320 exec/s: 0 rss: 74Mb L: 167/167 MS: 1 CopyPart- 00:07:33.303 [2024-10-07 09:26:28.836648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.303 [2024-10-07 09:26:28.836672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.303 #47 NEW cov: 12407 ft: 14277 corp: 9/1038b lim: 320 exec/s: 0 rss: 74Mb L: 124/167 MS: 1 CMP- DE: "H\000\000\000\000\000\000\000"- 00:07:33.563 [2024-10-07 09:26:28.876943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0xffffffffffffffff 00:07:33.563 [2024-10-07 09:26:28.876969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.563 [2024-10-07 09:26:28.877029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:33.563 [2024-10-07 09:26:28.877043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.563 [2024-10-07 09:26:28.877101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:33.563 [2024-10-07 09:26:28.877118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.563 #48 NEW cov: 12407 ft: 14494 corp: 10/1244b lim: 320 exec/s: 0 rss: 74Mb L: 206/206 MS: 1 CopyPart- 00:07:33.563 [2024-10-07 09:26:28.937066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.563 [2024-10-07 09:26:28.937092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.563 [2024-10-07 09:26:28.937165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (78) qid:0 cid:5 nsid:78787878 cdw10:78787878 cdw11:78787878 00:07:33.563 [2024-10-07 09:26:28.937179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.563 #49 NEW cov: 12407 ft: 14588 corp: 11/1426b lim: 320 exec/s: 0 rss: 74Mb L: 182/206 MS: 1 InsertRepeatedBytes- 00:07:33.563 [2024-10-07 09:26:28.997259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.563 [2024-10-07 09:26:28.997286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.563 [2024-10-07 09:26:28.997337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004087 cdw11:00000000 00:07:33.563 [2024-10-07 09:26:28.997352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.563 #50 NEW cov: 12407 ft: 14610 corp: 12/1593b lim: 320 exec/s: 0 rss: 74Mb L: 167/206 MS: 1 ChangeBinInt- 00:07:33.563 [2024-10-07 09:26:29.057291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.563 [2024-10-07 09:26:29.057316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.563 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:33.563 #51 NEW cov: 12430 ft: 14649 corp: 13/1713b lim: 320 exec/s: 0 rss: 74Mb L: 120/206 MS: 1 CrossOver- 00:07:33.563 [2024-10-07 09:26:29.097424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:33.563 [2024-10-07 09:26:29.097451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.823 #52 NEW cov: 12430 ft: 14710 corp: 14/1837b lim: 320 exec/s: 0 rss: 75Mb L: 124/206 MS: 1 PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:07:33.823 [2024-10-07 09:26:29.157832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.823 [2024-10-07 09:26:29.157860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.823 [2024-10-07 09:26:29.157930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000048 cdw11:00000000 00:07:33.823 [2024-10-07 09:26:29.157945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.823 [2024-10-07 09:26:29.157996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.823 [2024-10-07 09:26:29.158010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.823 #53 NEW cov: 12430 ft: 14785 corp: 15/2049b lim: 320 exec/s: 53 rss: 75Mb L: 212/212 MS: 1 CopyPart- 00:07:33.823 [2024-10-07 09:26:29.197796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.823 [2024-10-07 09:26:29.197829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.823 [2024-10-07 09:26:29.197904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (78) qid:0 cid:5 nsid:78787878 cdw10:78787878 cdw11:78787878 00:07:33.823 [2024-10-07 09:26:29.197919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.823 #54 NEW cov: 12430 ft: 14804 corp: 16/2239b lim: 320 exec/s: 54 rss: 75Mb L: 190/212 MS: 1 PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:07:33.823 [2024-10-07 09:26:29.257876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.823 [2024-10-07 09:26:29.257901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.823 #55 NEW cov: 12430 ft: 14823 corp: 17/2364b lim: 320 exec/s: 55 rss: 75Mb L: 125/212 MS: 1 CopyPart- 00:07:33.823 [2024-10-07 09:26:29.297971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.823 [2024-10-07 09:26:29.297996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.823 #56 NEW cov: 12430 ft: 14831 corp: 18/2489b lim: 320 exec/s: 56 rss: 75Mb L: 125/212 MS: 1 ChangeBit- 00:07:33.823 [2024-10-07 09:26:29.358162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.823 [2024-10-07 
09:26:29.358187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.823 #57 NEW cov: 12430 ft: 14846 corp: 19/2614b lim: 320 exec/s: 57 rss: 75Mb L: 125/212 MS: 1 ShuffleBytes- 00:07:34.082 [2024-10-07 09:26:29.398645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.398671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.082 [2024-10-07 09:26:29.398743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (78) qid:0 cid:5 nsid:78787878 cdw10:78787878 cdw11:78787878 00:07:34.082 [2024-10-07 09:26:29.398757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.082 [2024-10-07 09:26:29.398819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (79) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.398833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.082 [2024-10-07 09:26:29.398902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (78) qid:0 cid:7 nsid:78787878 cdw10:78787878 cdw11:78787878 00:07:34.082 [2024-10-07 09:26:29.398915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.082 #58 NEW cov: 12430 ft: 15479 corp: 20/2882b lim: 320 exec/s: 58 rss: 75Mb L: 268/268 MS: 1 CopyPart- 00:07:34.082 [2024-10-07 09:26:29.438489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.438515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.082 [2024-10-07 09:26:29.438573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (78) qid:0 cid:5 nsid:78787878 cdw10:78787878 cdw11:78787878 00:07:34.082 [2024-10-07 09:26:29.438587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.082 #59 NEW cov: 12430 ft: 15487 corp: 21/3064b lim: 320 exec/s: 59 rss: 75Mb L: 182/268 MS: 1 ChangeByte- 00:07:34.082 [2024-10-07 09:26:29.478483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.478508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.082 #60 NEW cov: 12430 ft: 15568 corp: 22/3183b lim: 320 exec/s: 60 rss: 75Mb L: 119/268 MS: 1 CrossOver- 00:07:34.082 [2024-10-07 09:26:29.538680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.538705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.082 #61 NEW cov: 12430 ft: 15579 corp: 23/3300b lim: 320 exec/s: 
61 rss: 75Mb L: 117/268 MS: 1 InsertByte- 00:07:34.082 [2024-10-07 09:26:29.578877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.578901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.082 [2024-10-07 09:26:29.578974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (78) qid:0 cid:5 nsid:78787878 cdw10:78787878 cdw11:78787878 00:07:34.082 [2024-10-07 09:26:29.578988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.082 #62 NEW cov: 12430 ft: 15604 corp: 24/3482b lim: 320 exec/s: 62 rss: 75Mb L: 182/268 MS: 1 ShuffleBytes- 00:07:34.082 [2024-10-07 09:26:29.618843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.082 [2024-10-07 09:26:29.618870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.340 #63 NEW cov: 12430 ft: 15621 corp: 25/3607b lim: 320 exec/s: 63 rss: 75Mb L: 125/268 MS: 1 PersAutoDict- DE: "\377$\364\376\302n%\202"- 00:07:34.340 [2024-10-07 09:26:29.679119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:34.340 [2024-10-07 09:26:29.679145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.340 [2024-10-07 09:26:29.679208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:34.340 [2024-10-07 09:26:29.679222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.340 #64 NEW cov: 12430 ft: 15643 corp: 26/3773b lim: 320 exec/s: 64 rss: 75Mb L: 166/268 MS: 1 ChangeByte- 00:07:34.340 [2024-10-07 09:26:29.719131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.340 [2024-10-07 09:26:29.719155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.340 #65 NEW cov: 12430 ft: 15679 corp: 27/3899b lim: 320 exec/s: 65 rss: 75Mb L: 126/268 MS: 1 InsertByte- 00:07:34.340 [2024-10-07 09:26:29.779287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:0082256e cdw11:00790000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.340 [2024-10-07 09:26:29.779311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.340 #66 NEW cov: 12430 ft: 15681 corp: 28/4009b lim: 320 exec/s: 66 rss: 75Mb L: 110/268 MS: 1 EraseBytes- 00:07:34.340 [2024-10-07 09:26:29.839619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.340 [2024-10-07 09:26:29.839644] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.340 [2024-10-07 09:26:29.839712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004087 cdw11:00000000 00:07:34.340 [2024-10-07 09:26:29.839726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.340 #67 NEW cov: 12430 ft: 15693 corp: 29/4176b lim: 320 exec/s: 67 rss: 75Mb L: 167/268 MS: 1 ChangeBinInt- 00:07:34.340 [2024-10-07 09:26:29.879719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:0082256e cdw11:00790000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.340 [2024-10-07 09:26:29.879745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.340 [2024-10-07 09:26:29.879809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:790000 cdw10:00000000 cdw11:00000000 00:07:34.340 [2024-10-07 09:26:29.879831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.600 #68 NEW cov: 12430 ft: 15695 corp: 30/4337b lim: 320 exec/s: 68 rss: 75Mb L: 161/268 MS: 1 CrossOver- 00:07:34.601 [2024-10-07 09:26:29.939736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:34.601 [2024-10-07 09:26:29.939760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.601 #69 NEW cov: 12430 ft: 15751 corp: 31/4458b lim: 320 exec/s: 69 rss: 75Mb L: 121/268 MS: 1 EraseBytes- 00:07:34.601 [2024-10-07 09:26:30.000201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.601 [2024-10-07 09:26:30.000227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.601 [2024-10-07 09:26:30.000296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:9a9a9a00 cdw10:9a9a9a9a cdw11:9a9a9a9a 00:07:34.601 [2024-10-07 09:26:30.000311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.601 [2024-10-07 09:26:30.000370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (9a) qid:0 cid:6 nsid:9a9a9a9a cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x9a9a9a9a9a9a9a 00:07:34.601 [2024-10-07 09:26:30.000384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.601 #70 NEW cov: 12430 ft: 15775 corp: 32/4665b lim: 320 exec/s: 70 rss: 75Mb L: 207/268 MS: 1 InsertRepeatedBytes- 00:07:34.601 [2024-10-07 09:26:30.060178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:87000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.601 [2024-10-07 09:26:30.060210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.601 #71 NEW cov: 12430 ft: 15859 corp: 33/4781b lim: 320 
exec/s: 71 rss: 75Mb L: 116/268 MS: 1 CrossOver- 00:07:34.601 [2024-10-07 09:26:30.100280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.601 [2024-10-07 09:26:30.100309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.601 #72 NEW cov: 12430 ft: 15912 corp: 34/4906b lim: 320 exec/s: 72 rss: 76Mb L: 125/268 MS: 1 ChangeBit- 00:07:34.601 [2024-10-07 09:26:30.160417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (25) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.601 [2024-10-07 09:26:30.160446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.860 #73 NEW cov: 12430 ft: 15989 corp: 35/4986b lim: 320 exec/s: 36 rss: 76Mb L: 80/268 MS: 1 CrossOver- 00:07:34.860 #73 DONE cov: 12430 ft: 15989 corp: 35/4986b lim: 320 exec/s: 36 rss: 76Mb 00:07:34.860 ###### Recommended dictionary. ###### 00:07:34.860 "\377$\364\376\302n%\202" # Uses: 1 00:07:34.860 "H\000\000\000\000\000\000\000" # Uses: 2 00:07:34.860 ###### End of recommended dictionary. ###### 00:07:34.860 Done 73 runs in 2 second(s) 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.860 09:26:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:34.860 [2024-10-07 09:26:30.367628] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:34.860 [2024-10-07 09:26:30.367709] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486901 ] 00:07:35.120 [2024-10-07 09:26:30.655347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.380 [2024-10-07 09:26:30.741534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.380 [2024-10-07 09:26:30.800460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.380 [2024-10-07 09:26:30.816662] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:35.380 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.380 INFO: Seed: 2849702367 00:07:35.380 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:35.380 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:35.380 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:35.380 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.380 #2 INITED exec/s: 0 rss: 67Mb 00:07:35.380 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.380 This may also happen if the target rejected all inputs we tried so far 00:07:35.380 [2024-10-07 09:26:30.872039] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.380 [2024-10-07 09:26:30.872181] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.380 [2024-10-07 09:26:30.872299] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.380 [2024-10-07 09:26:30.872412] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.380 [2024-10-07 09:26:30.872645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.380 [2024-10-07 09:26:30.872676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.380 [2024-10-07 09:26:30.872734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.380 [2024-10-07 09:26:30.872749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.380 [2024-10-07 09:26:30.872803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.380 [2024-10-07 09:26:30.872823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.380 [2024-10-07 09:26:30.872877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.380 [2024-10-07 09:26:30.872892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.640 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:35.640 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.640 #8 NEW cov: 12232 ft: 12228 corp: 2/29b lim: 30 exec/s: 0 rss: 74Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:35.640 [2024-10-07 09:26:31.193042] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.640 [2024-10-07 09:26:31.193186] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.640 [2024-10-07 09:26:31.193304] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.640 [2024-10-07 09:26:31.193418] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.640 [2024-10-07 09:26:31.193642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.640 [2024-10-07 09:26:31.193684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.640 [2024-10-07 09:26:31.193753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.640 [2024-10-07 09:26:31.193774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.640 [2024-10-07 09:26:31.193854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.640 [2024-10-07 09:26:31.193873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.640 [2024-10-07 09:26:31.193942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.640 [2024-10-07 09:26:31.193965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.901 #9 NEW cov: 12348 ft: 12844 corp: 3/58b lim: 30 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 CopyPart- 00:07:35.901 [2024-10-07 09:26:31.253082] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.253221] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.253336] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.253446] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.253561] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:35.901 [2024-10-07 09:26:31.253793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.253826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.253884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.253899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.253957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.253971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.254028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.254042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.254099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.254113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:07:35.901 #10 NEW cov: 12354 ft: 13189 corp: 4/88b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:07:35.901 [2024-10-07 09:26:31.313173] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e2e7 00:07:35.901 [2024-10-07 09:26:31.313311] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.313429] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.313541] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.313762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.313788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.313849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.313864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.313932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.313946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.314003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.314017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.901 #16 NEW cov: 12439 ft: 13555 corp: 5/117b lim: 30 exec/s: 0 rss: 74Mb L: 29/30 MS: 1 ChangeByte- 00:07:35.901 [2024-10-07 09:26:31.353315] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.353438] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.353553] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.353666] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.353779] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:35.901 [2024-10-07 09:26:31.354011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.354038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.354096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.354111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.354167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.354181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.354239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.354252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.354307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.354321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.901 #17 NEW cov: 12439 ft: 13665 corp: 6/147b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:35.901 [2024-10-07 09:26:31.413485] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.413609] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.413725] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000066e7 00:07:35.901 [2024-10-07 09:26:31.413840] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:35.901 [2024-10-07 09:26:31.413952] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:35.901 [2024-10-07 09:26:31.414172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.414196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.414255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.414270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.414328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.414342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.414399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.901 [2024-10-07 09:26:31.414413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.901 [2024-10-07 09:26:31.414468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:35.901 [2024-10-07 09:26:31.414481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.901 #18 NEW cov: 12439 ft: 13739 corp: 7/177b lim: 30 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:07:36.163 [2024-10-07 09:26:31.473615] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.473754] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.473878] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.474109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.474135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.474195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.474210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.474269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.474283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.163 #19 NEW cov: 12439 ft: 14328 corp: 8/198b lim: 30 exec/s: 0 rss: 74Mb L: 21/30 MS: 1 CrossOver- 00:07:36.163 [2024-10-07 09:26:31.513681] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.513801] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.514021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.514046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.514106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.514120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.163 #20 NEW cov: 12439 ft: 14662 corp: 9/213b lim: 30 exec/s: 0 rss: 74Mb L: 15/30 MS: 1 EraseBytes- 00:07:36.163 [2024-10-07 09:26:31.553853] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e2e7 00:07:36.163 [2024-10-07 09:26:31.553991] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.554107] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.554220] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.554447] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.554473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.554531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.554545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.554602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.554617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.554674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.554688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.163 #21 NEW cov: 12439 ft: 14725 corp: 10/242b lim: 30 exec/s: 0 rss: 75Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:36.163 [2024-10-07 09:26:31.613955] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.614077] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.614297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.614322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.163 [2024-10-07 09:26:31.614382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.614396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.163 #22 NEW cov: 12439 ft: 14754 corp: 11/257b lim: 30 exec/s: 0 rss: 75Mb L: 15/30 MS: 1 ChangeBinInt- 00:07:36.163 [2024-10-07 09:26:31.674284] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e2e7 00:07:36.163 [2024-10-07 09:26:31.674406] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.674522] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.674635] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.163 [2024-10-07 09:26:31.674747] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:36.163 [2024-10-07 09:26:31.674980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-10-07 09:26:31.675006] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.164 [2024-10-07 09:26:31.675065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-10-07 09:26:31.675080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.164 [2024-10-07 09:26:31.675135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-10-07 09:26:31.675149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.164 [2024-10-07 09:26:31.675208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-10-07 09:26:31.675223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.164 [2024-10-07 09:26:31.675281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-10-07 09:26:31.675295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.164 #23 NEW cov: 12439 ft: 14818 corp: 12/287b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 CrossOver- 00:07:36.164 [2024-10-07 09:26:31.714244] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.164 [2024-10-07 09:26:31.714365] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.164 [2024-10-07 09:26:31.714582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-10-07 09:26:31.714607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.164 [2024-10-07 09:26:31.714666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-10-07 09:26:31.714679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.425 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:36.425 #24 NEW cov: 12462 ft: 14856 corp: 13/302b lim: 30 exec/s: 0 rss: 75Mb L: 15/30 MS: 1 ChangeBit- 00:07:36.425 [2024-10-07 09:26:31.774528] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.774651] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.774779] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.774941] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.775057] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000e70a 00:07:36.425 [2024-10-07 09:26:31.775282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.775308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.775365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e74383e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.775379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.775438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.775453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.775511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.775525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.775582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.775599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.425 #25 NEW cov: 12462 ft: 14918 corp: 14/332b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:07:36.425 [2024-10-07 09:26:31.814638] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.814772] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e9 00:07:36.425 [2024-10-07 09:26:31.814907] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.815018] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.815134] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:36.425 [2024-10-07 09:26:31.815349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.815374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.815432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.815446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.815502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:36.425 [2024-10-07 09:26:31.815515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.815573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.815586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.815639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.815653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.425 #26 NEW cov: 12462 ft: 14941 corp: 15/362b lim: 30 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:07:36.425 [2024-10-07 09:26:31.854708] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.854857] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.854984] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.855095] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.855329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.855354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.855413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e71c83e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.855427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.855485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.855498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.425 [2024-10-07 09:26:31.855556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.855570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.425 #27 NEW cov: 12462 ft: 15008 corp: 16/390b lim: 30 exec/s: 27 rss: 75Mb L: 28/30 MS: 1 ChangeBinInt- 00:07:36.425 [2024-10-07 09:26:31.894748] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.895016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.895041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.425 #28 NEW cov: 12462 ft: 15486 corp: 17/398b lim: 30 exec/s: 28 rss: 75Mb L: 8/30 MS: 1 EraseBytes- 00:07:36.425 [2024-10-07 09:26:31.954921] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.425 [2024-10-07 09:26:31.955138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7f783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.425 [2024-10-07 09:26:31.955163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.685 #29 NEW cov: 12462 ft: 15567 corp: 18/406b lim: 30 exec/s: 29 rss: 75Mb L: 8/30 MS: 1 ChangeBit- 00:07:36.685 [2024-10-07 09:26:32.015175] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.685 [2024-10-07 09:26:32.015297] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.685 [2024-10-07 09:26:32.015410] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.685 [2024-10-07 09:26:32.015523] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.685 [2024-10-07 09:26:32.015735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.685 [2024-10-07 09:26:32.015760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.685 [2024-10-07 09:26:32.015823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.685 [2024-10-07 09:26:32.015838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.685 [2024-10-07 09:26:32.015895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.685 [2024-10-07 09:26:32.015909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.685 [2024-10-07 09:26:32.015968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.685 [2024-10-07 09:26:32.015982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.686 #30 NEW cov: 12462 ft: 15610 corp: 19/434b lim: 30 exec/s: 30 rss: 75Mb L: 28/30 MS: 1 CrossOver- 00:07:36.686 [2024-10-07 09:26:32.055354] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.055471] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.055581] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.055693] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000e7e7 00:07:36.686 [2024-10-07 09:26:32.055919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.055948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.056007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.056021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.056078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.056092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.056149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e781e7 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.056162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.686 #31 NEW cov: 12462 ft: 15619 corp: 20/463b lim: 30 exec/s: 31 rss: 75Mb L: 29/30 MS: 1 InsertByte- 00:07:36.686 [2024-10-07 09:26:32.115537] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.115675] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (25700) > buf size (4096) 00:07:36.686 [2024-10-07 09:26:32.115788] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.115918] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.116030] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:36.686 [2024-10-07 09:26:32.116248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.116273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.116330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:19180018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.116345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.116403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:181c83e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.116417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.116474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.116488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 
09:26:32.116544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.116558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.686 #32 NEW cov: 12485 ft: 15675 corp: 21/493b lim: 30 exec/s: 32 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:36.686 [2024-10-07 09:26:32.155674] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.155794] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (25700) > buf size (4096) 00:07:36.686 [2024-10-07 09:26:32.155914] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e719 00:07:36.686 [2024-10-07 09:26:32.156042] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.156156] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:36.686 [2024-10-07 09:26:32.156383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.156408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.156467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:19180018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.156481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.156539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:181c83e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.156553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.156609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:21e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.156622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.686 [2024-10-07 09:26:32.156681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.156694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.686 #33 NEW cov: 12485 ft: 15711 corp: 22/523b lim: 30 exec/s: 33 rss: 75Mb L: 30/30 MS: 1 ChangeBinInt- 00:07:36.686 [2024-10-07 09:26:32.215716] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.686 [2024-10-07 09:26:32.215969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.686 [2024-10-07 09:26:32.215995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:36.686 #34 NEW cov: 12485 ft: 15756 corp: 23/530b lim: 30 exec/s: 34 rss: 75Mb L: 7/30 MS: 1 CrossOver- 00:07:36.946 [2024-10-07 09:26:32.255914] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000018e7 00:07:36.946 [2024-10-07 09:26:32.256048] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.256162] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.256274] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.256503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e702e7 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.256529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.946 [2024-10-07 09:26:32.256589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.256604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.946 [2024-10-07 09:26:32.256662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.256676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.946 [2024-10-07 09:26:32.256739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.256753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.946 #35 NEW cov: 12485 ft: 15789 corp: 24/558b lim: 30 exec/s: 35 rss: 75Mb L: 28/30 MS: 1 ChangeBinInt- 00:07:36.946 [2024-10-07 09:26:32.295940] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.296078] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.296295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.296321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.946 [2024-10-07 09:26:32.296382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e78324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.296397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.946 #36 NEW cov: 12485 ft: 15796 corp: 25/574b lim: 30 exec/s: 36 rss: 75Mb L: 16/30 MS: 1 InsertByte- 00:07:36.946 [2024-10-07 09:26:32.336093] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.336229] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.336347] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.946 [2024-10-07 09:26:32.336566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.336592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.946 [2024-10-07 09:26:32.336653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.336667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.946 [2024-10-07 09:26:32.336726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.946 [2024-10-07 09:26:32.336740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.947 #37 NEW cov: 12485 ft: 15809 corp: 26/596b lim: 30 exec/s: 37 rss: 75Mb L: 22/30 MS: 1 EraseBytes- 00:07:36.947 [2024-10-07 09:26:32.376220] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000018e7 00:07:36.947 [2024-10-07 09:26:32.376342] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.376454] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.376566] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.376783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e702e7 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.376809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.376872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.376887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.376947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.376961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.377019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.377032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.947 #38 NEW cov: 12485 ft: 15826 corp: 27/624b lim: 30 exec/s: 38 rss: 75Mb L: 28/30 MS: 1 ShuffleBytes- 00:07:36.947 [2024-10-07 09:26:32.436443] ctrlr.c:2667:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: len (236548) > buf size (4096) 00:07:36.947 [2024-10-07 09:26:32.436579] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.436696] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.436810] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.437066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.437091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.437150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000831c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.437164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.437220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.437235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.437294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.437308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.947 #39 NEW cov: 12485 ft: 15833 corp: 28/652b lim: 30 exec/s: 39 rss: 75Mb L: 28/30 MS: 1 ChangeBinInt- 00:07:36.947 [2024-10-07 09:26:32.496639] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.496760] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:36.947 [2024-10-07 09:26:32.497001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.497028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.497090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000483e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.497104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.947 [2024-10-07 09:26:32.497164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.947 [2024-10-07 09:26:32.497178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.207 #40 NEW cov: 12502 ft: 15857 corp: 29/674b lim: 30 exec/s: 40 rss: 75Mb L: 22/30 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\004"- 00:07:37.207 
[2024-10-07 09:26:32.556748] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e2e7 00:07:37.208 [2024-10-07 09:26:32.556889] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.557002] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.557112] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.557333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.557359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.557418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0ae783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.557433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.557492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.557507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.557564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.557578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.208 #41 NEW cov: 12502 ft: 15863 corp: 30/703b lim: 30 exec/s: 41 rss: 75Mb L: 29/30 MS: 1 CopyPart- 00:07:37.208 [2024-10-07 09:26:32.596820] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xe7e7 00:07:37.208 [2024-10-07 09:26:32.596940] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.597067] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.597283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e70006 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.597309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.597368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.597383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.597441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.597456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:37.208 #42 NEW cov: 12502 ft: 15891 corp: 31/724b lim: 30 exec/s: 42 rss: 76Mb L: 21/30 MS: 1 CMP- DE: "\006\000"- 00:07:37.208 [2024-10-07 09:26:32.657107] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.657227] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xe7e7 00:07:37.208 [2024-10-07 09:26:32.657337] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.657446] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.208 [2024-10-07 09:26:32.657563] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e70a 00:07:37.208 [2024-10-07 09:26:32.657784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.657818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.657893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e700e7 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.657908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.657965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.657979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.658036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.658051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.658108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.658122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.208 #43 NEW cov: 12502 ft: 15901 corp: 32/754b lim: 30 exec/s: 43 rss: 76Mb L: 30/30 MS: 1 ChangeByte- 00:07:37.208 [2024-10-07 09:26:32.697012] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e741 00:07:37.208 [2024-10-07 09:26:32.697235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7f783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.697260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.208 #44 NEW cov: 12502 ft: 15915 corp: 33/763b lim: 30 exec/s: 44 rss: 76Mb L: 9/30 MS: 1 InsertByte- 00:07:37.208 [2024-10-07 09:26:32.757281] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f2f2 00:07:37.208 [2024-10-07 09:26:32.757422] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log 
page offset 0x20000f2f2 00:07:37.208 [2024-10-07 09:26:32.757538] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000f2f2 00:07:37.208 [2024-10-07 09:26:32.757649] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000f2f2 00:07:37.208 [2024-10-07 09:26:32.757893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.757919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.757979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f2f202f2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.757993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.758051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f2f202f2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.758065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.208 [2024-10-07 09:26:32.758123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:f2f202f2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.208 [2024-10-07 09:26:32.758137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.485 #45 NEW cov: 12502 ft: 15926 corp: 34/792b lim: 30 exec/s: 45 rss: 76Mb L: 29/30 MS: 1 InsertRepeatedBytes- 00:07:37.485 [2024-10-07 09:26:32.797313] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.485 [2024-10-07 09:26:32.797433] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000e7e7 00:07:37.485 [2024-10-07 09:26:32.797648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.485 [2024-10-07 09:26:32.797673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.486 [2024-10-07 09:26:32.797733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e7e783e7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.486 [2024-10-07 09:26:32.797747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.486 #46 NEW cov: 12502 ft: 15930 corp: 35/807b lim: 30 exec/s: 46 rss: 76Mb L: 15/30 MS: 1 ShuffleBytes- 00:07:37.486 [2024-10-07 09:26:32.837412] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (192528) > buf size (4096) 00:07:37.486 [2024-10-07 09:26:32.837654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:bc030000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.486 [2024-10-07 09:26:32.837680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.486 #47 NEW cov: 12502 ft: 15940 corp: 36/816b lim: 30 
exec/s: 23 rss: 76Mb L: 9/30 MS: 1 CMP- DE: "\274\003\000\000\000\000\000\000"- 00:07:37.486 #47 DONE cov: 12502 ft: 15940 corp: 36/816b lim: 30 exec/s: 23 rss: 76Mb 00:07:37.486 ###### Recommended dictionary. ###### 00:07:37.486 "\001\000\000\000\000\000\000\004" # Uses: 0 00:07:37.486 "\006\000" # Uses: 0 00:07:37.486 "\274\003\000\000\000\000\000\000" # Uses: 0 00:07:37.486 ###### End of recommended dictionary. ###### 00:07:37.486 Done 47 runs in 2 second(s) 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.486 09:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:37.745 [2024-10-07 09:26:33.068606] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:37.745 [2024-10-07 09:26:33.068689] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487270 ] 00:07:38.005 [2024-10-07 09:26:33.342099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.005 [2024-10-07 09:26:33.431585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.005 [2024-10-07 09:26:33.490514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.005 [2024-10-07 09:26:33.506715] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:38.005 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.005 INFO: Seed: 1243731315 00:07:38.005 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:38.005 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:38.005 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:38.005 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.005 #2 INITED exec/s: 0 rss: 67Mb 00:07:38.005 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.005 This may also happen if the target rejected all inputs we tried so far 00:07:38.005 [2024-10-07 09:26:33.562233] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:38.005 [2024-10-07 09:26:33.562360] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:38.005 [2024-10-07 09:26:33.562480] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:38.005 [2024-10-07 09:26:33.562706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.005 [2024-10-07 09:26:33.562735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.005 [2024-10-07 09:26:33.562794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.005 [2024-10-07 09:26:33.562811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.005 [2024-10-07 09:26:33.562888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.005 [2024-10-07 09:26:33.562905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.005 [2024-10-07 09:26:33.562960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.005 [2024-10-07 09:26:33.562976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.351 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:38.351 NEW_FUNC[2/714]: 
0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.351 #11 NEW cov: 12202 ft: 12193 corp: 2/31b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 4 CopyPart-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:38.610 [2024-10-07 09:26:33.914996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:33.915049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.610 #14 NEW cov: 12315 ft: 13509 corp: 3/42b lim: 35 exec/s: 0 rss: 74Mb L: 11/30 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:38.610 [2024-10-07 09:26:33.975386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:33.975413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.610 [2024-10-07 09:26:33.975505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:33.975523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.610 #15 NEW cov: 12321 ft: 13988 corp: 4/61b lim: 35 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 CopyPart- 00:07:38.610 [2024-10-07 09:26:34.045365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7a07000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:34.045393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.610 #21 NEW cov: 12406 ft: 14267 corp: 5/72b lim: 35 exec/s: 0 rss: 74Mb L: 11/30 MS: 1 ChangeByte- 00:07:38.610 [2024-10-07 09:26:34.095817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:34.095844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.610 [2024-10-07 09:26:34.095951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:34.095968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.610 #22 NEW cov: 12406 ft: 14373 corp: 6/91b lim: 35 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 CrossOver- 00:07:38.610 [2024-10-07 09:26:34.165980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 09:26:34.166005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.610 [2024-10-07 09:26:34.166101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.610 [2024-10-07 
09:26:34.166117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.869 #23 NEW cov: 12406 ft: 14411 corp: 7/110b lim: 35 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 ShuffleBytes- 00:07:38.869 [2024-10-07 09:26:34.216216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.869 [2024-10-07 09:26:34.216242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.869 [2024-10-07 09:26:34.216334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:7a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.869 [2024-10-07 09:26:34.216353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.869 #24 NEW cov: 12406 ft: 14487 corp: 8/129b lim: 35 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 CopyPart- 00:07:38.870 [2024-10-07 09:26:34.286161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.870 [2024-10-07 09:26:34.286188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.870 #25 NEW cov: 12406 ft: 14524 corp: 9/136b lim: 35 exec/s: 0 rss: 74Mb L: 7/30 MS: 1 EraseBytes- 00:07:38.870 [2024-10-07 09:26:34.356811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.870 [2024-10-07 09:26:34.356845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.870 [2024-10-07 09:26:34.356940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.870 [2024-10-07 09:26:34.356957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.870 #26 NEW cov: 12406 ft: 14612 corp: 10/155b lim: 35 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 ChangeBit- 00:07:38.870 [2024-10-07 09:26:34.406719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2e07000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.870 [2024-10-07 09:26:34.406746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.870 #27 NEW cov: 12406 ft: 14704 corp: 11/166b lim: 35 exec/s: 0 rss: 74Mb L: 11/30 MS: 1 ChangeByte- 00:07:39.129 [2024-10-07 09:26:34.456841] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:39.129 [2024-10-07 09:26:34.457369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.457399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.129 [2024-10-07 09:26:34.457499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ff00ffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.457518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.129 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:39.129 #28 NEW cov: 12429 ft: 14787 corp: 12/185b lim: 35 exec/s: 0 rss: 74Mb L: 19/30 MS: 1 ChangeBinInt- 00:07:39.129 [2024-10-07 09:26:34.527620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007a cdw11:ff007aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.527648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.129 [2024-10-07 09:26:34.527744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.527761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.129 #29 NEW cov: 12429 ft: 14801 corp: 13/204b lim: 35 exec/s: 29 rss: 74Mb L: 19/30 MS: 1 CopyPart- 00:07:39.129 [2024-10-07 09:26:34.597833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.597859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.129 [2024-10-07 09:26:34.597952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.597968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.129 #30 NEW cov: 12429 ft: 14820 corp: 14/223b lim: 35 exec/s: 30 rss: 74Mb L: 19/30 MS: 1 ChangeBit- 00:07:39.129 [2024-10-07 09:26:34.648601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.648629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.129 [2024-10-07 09:26:34.648727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.648743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.129 [2024-10-07 09:26:34.648850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:010000ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.648866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.129 [2024-10-07 09:26:34.648960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:f7ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.129 [2024-10-07 09:26:34.648977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.129 #31 NEW cov: 12429 ft: 14851 corp: 15/251b lim: 35 exec/s: 31 rss: 74Mb L: 28/30 MS: 1 CrossOver- 00:07:39.388 [2024-10-07 09:26:34.718032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1a7e000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.718059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 #34 NEW cov: 12429 ft: 14907 corp: 16/258b lim: 35 exec/s: 34 rss: 75Mb L: 7/30 MS: 3 EraseBytes-ChangeByte-InsertByte- 00:07:39.388 [2024-10-07 09:26:34.788645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007a cdw11:ff007aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.788671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 [2024-10-07 09:26:34.788754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.788772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.388 #35 NEW cov: 12429 ft: 14916 corp: 17/272b lim: 35 exec/s: 35 rss: 75Mb L: 14/30 MS: 1 EraseBytes- 00:07:39.388 [2024-10-07 09:26:34.858495] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:39.388 [2024-10-07 09:26:34.858975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:0100ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.859003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 [2024-10-07 09:26:34.859094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:1fff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.859115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.388 #36 NEW cov: 12429 ft: 14965 corp: 18/291b lim: 35 exec/s: 36 rss: 75Mb L: 19/30 MS: 1 CMP- DE: "\001\000\000\037"- 00:07:39.388 [2024-10-07 09:26:34.908600] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:39.388 [2024-10-07 09:26:34.909085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ff7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.909113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.388 [2024-10-07 09:26:34.909209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:05000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.388 [2024-10-07 09:26:34.909233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.388 #37 NEW cov: 12429 ft: 14970 corp: 19/310b lim: 35 exec/s: 37 rss: 75Mb L: 19/30 MS: 1 ChangeBinInt- 00:07:39.648 [2024-10-07 09:26:34.959489] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.648 [2024-10-07 09:26:34.959515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.648 #38 NEW cov: 12429 ft: 15313 corp: 20/325b lim: 35 exec/s: 38 rss: 75Mb L: 15/30 MS: 1 PersAutoDict- DE: "\001\000\000\037"- 00:07:39.648 [2024-10-07 09:26:35.009541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:7a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.648 [2024-10-07 09:26:35.009566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.648 [2024-10-07 09:26:35.009652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:7aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.648 [2024-10-07 09:26:35.009668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.648 #39 NEW cov: 12429 ft: 15323 corp: 21/341b lim: 35 exec/s: 39 rss: 75Mb L: 16/30 MS: 1 CrossOver- 00:07:39.648 [2024-10-07 09:26:35.059467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.649 [2024-10-07 09:26:35.059494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.649 #40 NEW cov: 12429 ft: 15356 corp: 22/352b lim: 35 exec/s: 40 rss: 75Mb L: 11/30 MS: 1 ShuffleBytes- 00:07:39.649 [2024-10-07 09:26:35.109973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.649 [2024-10-07 09:26:35.109998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.649 #41 NEW cov: 12429 ft: 15368 corp: 23/367b lim: 35 exec/s: 41 rss: 75Mb L: 15/30 MS: 1 ChangeByte- 00:07:39.649 [2024-10-07 09:26:35.179981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01000021 cdw11:0a00001f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.649 [2024-10-07 09:26:35.180008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.649 #43 NEW cov: 12429 ft: 15382 corp: 24/380b lim: 35 exec/s: 43 rss: 75Mb L: 13/30 MS: 2 ChangeByte-CrossOver- 00:07:39.908 [2024-10-07 09:26:35.230525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:00000185 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.908 [2024-10-07 09:26:35.230551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.908 [2024-10-07 09:26:35.230642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:ff0008ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.908 [2024-10-07 09:26:35.230658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.908 #44 NEW cov: 12429 ft: 15391 corp: 25/399b lim: 35 
exec/s: 44 rss: 75Mb L: 19/30 MS: 1 ChangeBinInt- 00:07:39.908 [2024-10-07 09:26:35.280413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:1a7e000a cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.908 [2024-10-07 09:26:35.280452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.908 #45 NEW cov: 12429 ft: 15405 corp: 26/410b lim: 35 exec/s: 45 rss: 75Mb L: 11/30 MS: 1 PersAutoDict- DE: "\001\000\000\037"- 00:07:39.908 [2024-10-07 09:26:35.351001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff007a cdw11:ff007aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.908 [2024-10-07 09:26:35.351028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.908 [2024-10-07 09:26:35.351135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.908 [2024-10-07 09:26:35.351153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.908 #46 NEW cov: 12429 ft: 15426 corp: 27/429b lim: 35 exec/s: 46 rss: 75Mb L: 19/30 MS: 1 ShuffleBytes- 00:07:39.908 [2024-10-07 09:26:35.400949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:7aff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.908 [2024-10-07 09:26:35.400979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.909 #47 NEW cov: 12429 ft: 15449 corp: 28/440b lim: 35 exec/s: 47 rss: 75Mb L: 11/30 MS: 1 ChangeBit- 00:07:39.909 [2024-10-07 09:26:35.451547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:7aff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.909 [2024-10-07 09:26:35.451573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.168 #48 NEW cov: 12429 ft: 15459 corp: 29/455b lim: 35 exec/s: 48 rss: 75Mb L: 15/30 MS: 1 CrossOver- 00:07:40.168 [2024-10-07 09:26:35.501334] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.168 [2024-10-07 09:26:35.501615] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:40.168 [2024-10-07 09:26:35.502334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.168 [2024-10-07 09:26:35.502363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.168 [2024-10-07 09:26:35.502452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.168 [2024-10-07 09:26:35.502471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.168 [2024-10-07 09:26:35.502562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:40.168 [2024-10-07 09:26:35.502580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.168 [2024-10-07 09:26:35.502673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:40.168 [2024-10-07 09:26:35.502691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.168 #49 NEW cov: 12429 ft: 15533 corp: 30/485b lim: 35 exec/s: 24 rss: 75Mb L: 30/30 MS: 1 ChangeByte- 00:07:40.168 #49 DONE cov: 12429 ft: 15533 corp: 30/485b lim: 35 exec/s: 24 rss: 75Mb 00:07:40.168 ###### Recommended dictionary. ###### 00:07:40.168 "\001\000\000\037" # Uses: 2 00:07:40.168 ###### End of recommended dictionary. ###### 00:07:40.168 Done 49 runs in 2 second(s) 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:40.168 09:26:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:40.169 [2024-10-07 09:26:35.728618] Starting 
SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:40.169 [2024-10-07 09:26:35.728691] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487699 ] 00:07:40.738 [2024-10-07 09:26:36.012717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.738 [2024-10-07 09:26:36.108532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.738 [2024-10-07 09:26:36.167924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.738 [2024-10-07 09:26:36.184140] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:40.738 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.738 INFO: Seed: 3921724885 00:07:40.738 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:40.738 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:40.738 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:40.738 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.738 #2 INITED exec/s: 0 rss: 67Mb 00:07:40.738 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.738 This may also happen if the target rejected all inputs we tried so far 00:07:41.257 NEW_FUNC[1/703]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:41.257 NEW_FUNC[2/703]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:41.257 #6 NEW cov: 12070 ft: 12052 corp: 2/5b lim: 20 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ShuffleBytes-CopyPart-CrossOver-CopyPart- 00:07:41.257 #7 NEW cov: 12201 ft: 12678 corp: 3/10b lim: 20 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:41.257 #13 NEW cov: 12207 ft: 13010 corp: 4/16b lim: 20 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:07:41.257 #14 NEW cov: 12292 ft: 13252 corp: 5/20b lim: 20 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 EraseBytes- 00:07:41.257 #15 NEW cov: 12292 ft: 13366 corp: 6/24b lim: 20 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 ChangeBinInt- 00:07:41.515 #16 NEW cov: 12292 ft: 13447 corp: 7/28b lim: 20 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 ShuffleBytes- 00:07:41.515 #20 NEW cov: 12292 ft: 13543 corp: 8/34b lim: 20 exec/s: 0 rss: 74Mb L: 6/6 MS: 4 CopyPart-CopyPart-CopyPart-InsertRepeatedBytes- 00:07:41.515 #21 NEW cov: 12292 ft: 13586 corp: 9/39b lim: 20 exec/s: 0 rss: 74Mb L: 5/6 MS: 1 ChangeBit- 00:07:41.515 #23 NEW cov: 12292 ft: 13621 corp: 10/44b lim: 20 exec/s: 0 rss: 74Mb L: 5/6 MS: 2 ShuffleBytes-CrossOver- 00:07:41.775 #24 NEW cov: 12292 ft: 13657 corp: 11/48b lim: 20 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 ChangeBinInt- 00:07:41.775 [2024-10-07 09:26:37.116354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:41.775 [2024-10-07 09:26:37.116399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.775 NEW_FUNC[1/21]: 0x132df88 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:07:41.776 NEW_FUNC[2/21]: 0x132eb08 in nvmf_qpair_abort_aer 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:07:41.776 #28 NEW cov: 12666 ft: 14450 corp: 12/64b lim: 20 exec/s: 0 rss: 75Mb L: 16/16 MS: 4 EraseBytes-ChangeBinInt-EraseBytes-InsertRepeatedBytes- 00:07:41.776 #29 NEW cov: 12666 ft: 14496 corp: 13/69b lim: 20 exec/s: 29 rss: 75Mb L: 5/16 MS: 1 CrossOver- 00:07:41.776 #30 NEW cov: 12666 ft: 14564 corp: 14/73b lim: 20 exec/s: 30 rss: 75Mb L: 4/16 MS: 1 CrossOver- 00:07:42.034 #31 NEW cov: 12666 ft: 14604 corp: 15/77b lim: 20 exec/s: 31 rss: 75Mb L: 4/16 MS: 1 CrossOver- 00:07:42.034 #32 NEW cov: 12666 ft: 14617 corp: 16/82b lim: 20 exec/s: 32 rss: 75Mb L: 5/16 MS: 1 ShuffleBytes- 00:07:42.034 #33 NEW cov: 12666 ft: 14637 corp: 17/88b lim: 20 exec/s: 33 rss: 75Mb L: 6/16 MS: 1 ChangeBit- 00:07:42.034 #34 NEW cov: 12674 ft: 14779 corp: 18/100b lim: 20 exec/s: 34 rss: 75Mb L: 12/16 MS: 1 InsertRepeatedBytes- 00:07:42.034 #35 NEW cov: 12674 ft: 14850 corp: 19/105b lim: 20 exec/s: 35 rss: 75Mb L: 5/16 MS: 1 CopyPart- 00:07:42.294 #36 NEW cov: 12674 ft: 14879 corp: 20/109b lim: 20 exec/s: 36 rss: 75Mb L: 4/16 MS: 1 ChangeBinInt- 00:07:42.294 #37 NEW cov: 12674 ft: 14914 corp: 21/114b lim: 20 exec/s: 37 rss: 75Mb L: 5/16 MS: 1 InsertByte- 00:07:42.294 #38 NEW cov: 12675 ft: 15125 corp: 22/122b lim: 20 exec/s: 38 rss: 75Mb L: 8/16 MS: 1 CrossOver- 00:07:42.294 #39 NEW cov: 12675 ft: 15138 corp: 23/131b lim: 20 exec/s: 39 rss: 75Mb L: 9/16 MS: 1 CrossOver- 00:07:42.553 #40 NEW cov: 12675 ft: 15151 corp: 24/139b lim: 20 exec/s: 40 rss: 75Mb L: 8/16 MS: 1 ChangeBit- 00:07:42.553 #41 NEW cov: 12675 ft: 15175 corp: 25/145b lim: 20 exec/s: 41 rss: 75Mb L: 6/16 MS: 1 ChangeBinInt- 00:07:42.553 #42 NEW cov: 12675 ft: 15209 corp: 26/155b lim: 20 exec/s: 42 rss: 75Mb L: 10/16 MS: 1 CrossOver- 00:07:42.553 #43 NEW cov: 12675 ft: 15210 corp: 27/160b lim: 20 exec/s: 43 rss: 75Mb L: 5/16 MS: 1 ShuffleBytes- 00:07:42.812 #44 NEW cov: 12675 ft: 15213 corp: 28/164b lim: 20 exec/s: 44 rss: 75Mb L: 4/16 MS: 1 CrossOver- 00:07:42.812 #45 NEW cov: 12675 ft: 15257 corp: 29/169b lim: 20 exec/s: 45 rss: 75Mb L: 5/16 MS: 1 ShuffleBytes- 00:07:42.812 #46 NEW cov: 12675 ft: 15264 corp: 30/173b lim: 20 exec/s: 23 rss: 75Mb L: 4/16 MS: 1 CrossOver- 00:07:42.812 #46 DONE cov: 12675 ft: 15264 corp: 30/173b lim: 20 exec/s: 23 rss: 75Mb 00:07:42.812 Done 46 runs in 2 second(s) 00:07:43.071 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:43.071 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.071 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.071 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:43.072 09:26:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:43.072 [2024-10-07 09:26:38.463564] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:43.072 [2024-10-07 09:26:38.463644] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488126 ] 00:07:43.331 [2024-10-07 09:26:38.768531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.331 [2024-10-07 09:26:38.859830] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.592 [2024-10-07 09:26:38.918952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.592 [2024-10-07 09:26:38.935153] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:43.592 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.592 INFO: Seed: 2376751266 00:07:43.592 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:43.592 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:43.592 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:43.592 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.592 #2 INITED exec/s: 0 rss: 67Mb 00:07:43.592 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:43.592 This may also happen if the target rejected all inputs we tried so far 00:07:43.592 [2024-10-07 09:26:38.984539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:230a7eb1 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.592 [2024-10-07 09:26:38.984568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.592 [2024-10-07 09:26:38.984625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.592 [2024-10-07 09:26:38.984639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.852 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:43.852 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.852 #12 NEW cov: 12207 ft: 12208 corp: 2/16b lim: 35 exec/s: 0 rss: 74Mb L: 15/15 MS: 5 CopyPart-InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:43.852 [2024-10-07 09:26:39.305472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80ff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.305514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.305568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.305581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.305632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.305645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.852 #17 NEW cov: 12325 ft: 13200 corp: 3/39b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 5 InsertByte-ShuffleBytes-CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:43.852 [2024-10-07 09:26:39.345472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80ff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.345498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.345551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:f8ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.345565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.345616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:43.852 [2024-10-07 09:26:39.345629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.852 #18 NEW cov: 12331 ft: 13312 corp: 4/62b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 ChangeBinInt- 00:07:43.852 [2024-10-07 09:26:39.405775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80ff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.405802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.405860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.405873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.405924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.405937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.852 [2024-10-07 09:26:39.405989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.852 [2024-10-07 09:26:39.406002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.133 #19 NEW cov: 12416 ft: 13873 corp: 5/96b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:44.133 [2024-10-07 09:26:39.445914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.445954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.446024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.446042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.446093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00f80000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.446106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.446157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.446171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.133 #20 NEW cov: 12416 ft: 13977 corp: 6/125b lim: 35 exec/s: 0 rss: 74Mb L: 29/34 MS: 1 InsertRepeatedBytes- 00:07:44.133 [2024-10-07 09:26:39.506082] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b1230a7e cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.506108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.506162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.506176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.506228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.506241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.506291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.506304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.133 #21 NEW cov: 12416 ft: 14010 corp: 7/159b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:07:44.133 [2024-10-07 09:26:39.566253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b1230a7e cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.566278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.566331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.566345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.566414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.566429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.566481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:24ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.566493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.133 #22 NEW cov: 12416 ft: 14115 corp: 8/193b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeByte- 00:07:44.133 [2024-10-07 09:26:39.626403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b1230a7e cdw11:32ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.626432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.626485] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.626498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.626550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.626563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.626614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.626628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.133 #23 NEW cov: 12416 ft: 14214 corp: 9/227b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeByte- 00:07:44.133 [2024-10-07 09:26:39.666633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.666659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.666712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.666725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.666777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.666790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.666844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.666857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.133 [2024-10-07 09:26:39.666908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.133 [2024-10-07 09:26:39.666921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.393 #24 NEW cov: 12416 ft: 14303 corp: 10/262b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:07:44.393 [2024-10-07 09:26:39.726674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.726700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.393 [2024-10-07 09:26:39.726754] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.726768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.393 [2024-10-07 09:26:39.726825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00f80000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.726842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.393 [2024-10-07 09:26:39.726891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fff7ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.726904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.393 #25 NEW cov: 12416 ft: 14328 corp: 11/291b lim: 35 exec/s: 0 rss: 75Mb L: 29/35 MS: 1 ChangeBit- 00:07:44.393 [2024-10-07 09:26:39.786829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.786854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.393 [2024-10-07 09:26:39.786922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.786936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.393 [2024-10-07 09:26:39.786989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00f80000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.393 [2024-10-07 09:26:39.787002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.787055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fff7ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.787068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.394 #26 NEW cov: 12416 ft: 14350 corp: 12/320b lim: 35 exec/s: 0 rss: 75Mb L: 29/35 MS: 1 CrossOver- 00:07:44.394 [2024-10-07 09:26:39.826969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.826996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.827051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.827065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.827119] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00f80000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.827131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.827185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fff7ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.827199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.394 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:44.394 #27 NEW cov: 12439 ft: 14401 corp: 13/349b lim: 35 exec/s: 0 rss: 75Mb L: 29/35 MS: 1 ChangeByte- 00:07:44.394 [2024-10-07 09:26:39.887274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:327e0a32 cdw11:b1230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.887299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.887366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.887384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.887437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.887450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.887500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.887513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.887565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.887578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.394 #28 NEW cov: 12439 ft: 14429 corp: 14/384b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:44.394 [2024-10-07 09:26:39.947450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.947475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.947529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.947542] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.947594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.947607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.947660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.947673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.394 [2024-10-07 09:26:39.947725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.394 [2024-10-07 09:26:39.947738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.654 #29 NEW cov: 12439 ft: 14473 corp: 15/419b lim: 35 exec/s: 29 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:44.654 [2024-10-07 09:26:39.987536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:39.987561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:39.987614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:39.987628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:39.987682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff040000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:39.987698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:39.987748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00250003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:39.987761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:39.987811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:39.987828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.654 #30 NEW cov: 12439 ft: 14496 corp: 16/454b lim: 35 exec/s: 30 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:44.654 [2024-10-07 09:26:40.027509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80ff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:40.027535] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:40.027590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:40.027604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:40.027659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:40.027673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:40.027725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:40.027738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.654 #31 NEW cov: 12439 ft: 14504 corp: 17/483b lim: 35 exec/s: 31 rss: 75Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:07:44.654 [2024-10-07 09:26:40.067820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:40.067846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.654 [2024-10-07 09:26:40.067902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.654 [2024-10-07 09:26:40.067916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.067969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.067983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.068034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.068047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.068101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.068114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.655 #32 NEW cov: 12439 ft: 14524 corp: 18/518b lim: 35 exec/s: 32 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:44.655 [2024-10-07 09:26:40.107743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:16162d16 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.107770] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.107842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.107857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.107910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.107924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.107979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.107992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.655 #37 NEW cov: 12439 ft: 14538 corp: 19/547b lim: 35 exec/s: 37 rss: 75Mb L: 29/35 MS: 5 ShuffleBytes-CopyPart-EraseBytes-ChangeByte-InsertRepeatedBytes- 00:07:44.655 [2024-10-07 09:26:40.148067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.148091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.148145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.148159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.148210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.148223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.148275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff230a cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.148289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.148341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.148354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.655 #38 NEW cov: 12439 ft: 14547 corp: 20/582b lim: 35 exec/s: 38 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:44.655 [2024-10-07 09:26:40.188126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:23f60000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:44.655 [2024-10-07 09:26:40.188151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.188222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.188236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.188305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.188318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.188370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.188383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.655 [2024-10-07 09:26:40.188435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.655 [2024-10-07 09:26:40.188448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.918 #39 NEW cov: 12439 ft: 14596 corp: 21/617b lim: 35 exec/s: 39 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:44.918 [2024-10-07 09:26:40.248199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80ff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.248224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.248280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.248295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.248350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.248363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.248417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.248430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.918 #40 NEW cov: 12439 ft: 14609 corp: 22/651b lim: 35 exec/s: 40 rss: 75Mb L: 34/35 MS: 1 ShuffleBytes- 00:07:44.918 [2024-10-07 09:26:40.288277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:160a0a16 cdw11:80ff0003 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.288301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.288371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.288386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.288440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.288453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.288506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.288518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.918 #41 NEW cov: 12439 ft: 14647 corp: 23/682b lim: 35 exec/s: 41 rss: 75Mb L: 31/35 MS: 1 CrossOver- 00:07:44.918 [2024-10-07 09:26:40.348439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80ff0a0a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.348463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.348533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.348547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.348599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.348613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.348665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffbf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.348678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.918 #42 NEW cov: 12439 ft: 14656 corp: 24/716b lim: 35 exec/s: 42 rss: 75Mb L: 34/35 MS: 1 ChangeBit- 00:07:44.918 [2024-10-07 09:26:40.388709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.388733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.388804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fbffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.388822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.388875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.388890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.388944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff230a cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.388956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.389010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.389024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.918 #43 NEW cov: 12439 ft: 14676 corp: 25/751b lim: 35 exec/s: 43 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:44.918 [2024-10-07 09:26:40.448879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:327e0a32 cdw11:b1230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.448904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.448973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.448988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.449042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:327e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.449056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.449107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff230a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.449120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.918 [2024-10-07 09:26:40.449173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.918 [2024-10-07 09:26:40.449186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.179 #44 NEW cov: 12439 ft: 14715 corp: 26/786b lim: 35 exec/s: 44 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:07:45.179 [2024-10-07 09:26:40.509043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7ab10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:45.179 [2024-10-07 09:26:40.509080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.509152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.509166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.509218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff040000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.509232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.509284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00250003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.509296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.509350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.509364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.179 #45 NEW cov: 12439 ft: 14744 corp: 27/821b lim: 35 exec/s: 45 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:07:45.179 [2024-10-07 09:26:40.569095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:b1230a7e cdw11:0aff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.569120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.569192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.569206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.569262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.569274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.569325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff24ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.569342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.179 #46 NEW cov: 12439 ft: 14761 corp: 28/854b lim: 35 exec/s: 46 rss: 75Mb L: 33/35 MS: 1 EraseBytes- 00:07:45.179 [2024-10-07 09:26:40.609291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7ab10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:45.179 [2024-10-07 09:26:40.609314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.609369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:96ffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.609383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.609436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff040000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.609449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.609501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00250003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.609514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.179 [2024-10-07 09:26:40.609567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.179 [2024-10-07 09:26:40.609580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.179 #47 NEW cov: 12439 ft: 14782 corp: 29/889b lim: 35 exec/s: 47 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:07:45.180 [2024-10-07 09:26:40.669480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7eb10a32 cdw11:230a0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.669504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.669559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.669573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.669643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.669657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.669711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff240003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.669724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.669779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:feffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.669791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.180 #48 NEW cov: 12439 ft: 14818 corp: 30/924b lim: 35 exec/s: 48 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:45.180 [2024-10-07 09:26:40.709424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:327e0a32 cdw11:b1230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.709457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.709530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.709545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.709598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:327e0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.709612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.180 [2024-10-07 09:26:40.709664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff230a cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.180 [2024-10-07 09:26:40.709677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.440 #49 NEW cov: 12439 ft: 14836 corp: 31/958b lim: 35 exec/s: 49 rss: 76Mb L: 34/35 MS: 1 EraseBytes- 00:07:45.440 [2024-10-07 09:26:40.769622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.769647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.769702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.769715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.769766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00f80000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.769779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.769835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.769849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.440 #50 NEW cov: 12439 ft: 14852 corp: 32/986b lim: 35 exec/s: 50 rss: 76Mb L: 28/35 MS: 1 EraseBytes- 00:07:45.440 [2024-10-07 09:26:40.809738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.809764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.809825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.809838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.809892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:08080000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.809906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.809960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fff7ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.809974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.440 #51 NEW cov: 12439 ft: 14871 corp: 33/1015b lim: 35 exec/s: 51 rss: 76Mb L: 29/35 MS: 1 ChangeBinInt- 00:07:45.440 [2024-10-07 09:26:40.869848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a80 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.869874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.869944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.869959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.870014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f8ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.870027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.870093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.870106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.440 #52 NEW cov: 12439 ft: 14910 corp: 34/1043b lim: 35 exec/s: 52 rss: 76Mb L: 28/35 MS: 1 CopyPart- 00:07:45.440 [2024-10-07 09:26:40.929706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:327e0a32 cdw11:b1230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.929733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.440 [2024-10-07 09:26:40.929787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:45.440 [2024-10-07 09:26:40.929800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.440 #53 NEW cov: 12439 ft: 14925 corp: 35/1062b lim: 35 exec/s: 53 rss: 76Mb L: 19/35 MS: 1 EraseBytes- 00:07:45.440 [2024-10-07 09:26:40.970015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:80000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.441 [2024-10-07 09:26:40.970042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.441 [2024-10-07 09:26:40.970110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.441 [2024-10-07 09:26:40.970124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.441 [2024-10-07 09:26:40.970180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f7ff00ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:45.441 [2024-10-07 09:26:40.970193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.700 #54 NEW cov: 12439 ft: 14964 corp: 36/1083b lim: 35 exec/s: 27 rss: 76Mb L: 21/35 MS: 1 EraseBytes- 00:07:45.700 #54 DONE cov: 12439 ft: 14964 corp: 36/1083b lim: 35 exec/s: 27 rss: 76Mb 00:07:45.700 Done 54 runs in 2 second(s) 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:45.700 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:45.701 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.701 
09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.701 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:45.701 09:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:45.701 [2024-10-07 09:26:41.201621] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:45.701 [2024-10-07 09:26:41.201702] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488492 ] 00:07:45.960 [2024-10-07 09:26:41.510937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.220 [2024-10-07 09:26:41.604176] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.220 [2024-10-07 09:26:41.663108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.220 [2024-10-07 09:26:41.679319] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:46.220 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.220 INFO: Seed: 826777187 00:07:46.220 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:46.220 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:46.220 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:46.220 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.220 #2 INITED exec/s: 0 rss: 67Mb 00:07:46.220 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:46.220 This may also happen if the target rejected all inputs we tried so far 00:07:46.220 [2024-10-07 09:26:41.724735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.220 [2024-10-07 09:26:41.724764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:46.789 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:46.789 #4 NEW cov: 12223 ft: 12219 corp: 2/17b lim: 45 exec/s: 0 rss: 74Mb L: 16/16 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:46.789 [2024-10-07 09:26:42.065615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f474f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.789 [2024-10-07 09:26:42.065650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 #5 NEW cov: 12336 ft: 12767 corp: 3/33b lim: 45 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 ChangeByte- 00:07:46.789 [2024-10-07 09:26:42.125726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.789 [2024-10-07 09:26:42.125755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 #13 NEW cov: 12342 ft: 13095 corp: 4/45b lim: 45 exec/s: 0 rss: 74Mb L: 12/16 MS: 3 ChangeBit-InsertByte-CrossOver- 00:07:46.789 [2024-10-07 09:26:42.165819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.789 [2024-10-07 09:26:42.165847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 #14 NEW cov: 12427 ft: 13363 corp: 5/57b lim: 45 exec/s: 0 rss: 74Mb L: 12/16 MS: 1 ChangeBinInt- 00:07:46.789 [2024-10-07 09:26:42.225967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:473a4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.789 [2024-10-07 09:26:42.225992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 #15 NEW cov: 12427 ft: 13490 corp: 6/69b lim: 45 exec/s: 0 rss: 74Mb L: 12/16 MS: 1 ChangeByte- 00:07:46.789 [2024-10-07 09:26:42.266099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.789 [2024-10-07 09:26:42.266125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 #16 NEW cov: 12427 ft: 13612 corp: 7/82b lim: 45 exec/s: 0 rss: 74Mb L: 13/16 MS: 1 InsertByte- 00:07:46.789 [2024-10-07 09:26:42.306369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:473a4f4f cdw11:4f470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:46.789 [2024-10-07 09:26:42.306394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.789 [2024-10-07 09:26:42.306448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.789 [2024-10-07 09:26:42.306463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.789 #17 NEW cov: 12427 ft: 14425 corp: 8/103b lim: 45 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 CopyPart- 00:07:47.048 [2024-10-07 09:26:42.366378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:6f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.366405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 #18 NEW cov: 12427 ft: 14501 corp: 9/119b lim: 45 exec/s: 0 rss: 74Mb L: 16/21 MS: 1 ChangeBit- 00:07:47.048 [2024-10-07 09:26:42.407134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.407160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.407215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.407229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.407289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.407302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.407357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.407370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.407423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.407435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.048 #19 NEW cov: 12427 ft: 14977 corp: 10/164b lim: 45 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:07:47.048 [2024-10-07 09:26:42.446749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.446774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.446830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.446844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.048 #20 NEW cov: 12427 ft: 15080 corp: 11/185b lim: 45 exec/s: 0 rss: 74Mb L: 21/45 MS: 1 CopyPart- 00:07:47.048 [2024-10-07 09:26:42.486739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.486764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 #21 NEW cov: 12427 ft: 15115 corp: 12/194b lim: 45 exec/s: 0 rss: 74Mb L: 9/45 MS: 1 EraseBytes- 00:07:47.048 [2024-10-07 09:26:42.527476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.527500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.527557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.527570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.527622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.527635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.527689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.527702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.527756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.527768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.048 #22 NEW cov: 12427 ft: 15141 corp: 13/239b lim: 45 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 CopyPart- 00:07:47.048 [2024-10-07 09:26:42.587160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:243a4f4f cdw11:4f470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.587185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 [2024-10-07 09:26:42.587239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-10-07 09:26:42.587253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:47.307 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:47.307 #23 NEW cov: 12450 ft: 15223 corp: 14/260b lim: 45 exec/s: 0 rss: 74Mb L: 21/45 MS: 1 ChangeByte- 00:07:47.307 [2024-10-07 09:26:42.647335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:243a4f4f cdw11:4f470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.647359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.307 [2024-10-07 09:26:42.647429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.647444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.307 #24 NEW cov: 12450 ft: 15244 corp: 15/281b lim: 45 exec/s: 0 rss: 74Mb L: 21/45 MS: 1 ChangeBit- 00:07:47.307 [2024-10-07 09:26:42.707355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.707380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.307 #25 NEW cov: 12450 ft: 15305 corp: 16/296b lim: 45 exec/s: 25 rss: 74Mb L: 15/45 MS: 1 CrossOver- 00:07:47.307 [2024-10-07 09:26:42.767548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.767572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.307 #26 NEW cov: 12450 ft: 15324 corp: 17/308b lim: 45 exec/s: 26 rss: 75Mb L: 12/45 MS: 1 ShuffleBytes- 00:07:47.307 [2024-10-07 09:26:42.807807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.807837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.307 [2024-10-07 09:26:42.807892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5e0e4f4f cdw11:0e4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.807906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.307 #27 NEW cov: 12450 ft: 15368 corp: 18/333b lim: 45 exec/s: 27 rss: 75Mb L: 25/45 MS: 1 CrossOver- 00:07:47.307 [2024-10-07 09:26:42.868167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.868193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.307 [2024-10-07 09:26:42.868249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f424f4f cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.868267] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.307 [2024-10-07 09:26:42.868322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.307 [2024-10-07 09:26:42.868336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.567 #28 NEW cov: 12450 ft: 15605 corp: 19/366b lim: 45 exec/s: 28 rss: 75Mb L: 33/45 MS: 1 CrossOver- 00:07:47.567 [2024-10-07 09:26:42.908281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:42.908305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.567 [2024-10-07 09:26:42.908360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:42.908374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.567 [2024-10-07 09:26:42.908442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:42.908456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.567 #29 NEW cov: 12450 ft: 15615 corp: 20/401b lim: 45 exec/s: 29 rss: 75Mb L: 35/45 MS: 1 CrossOver- 00:07:47.567 [2024-10-07 09:26:42.948195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:243a4f4f cdw11:4f470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:42.948219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.567 [2024-10-07 09:26:42.948290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4fff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:42.948304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.567 #30 NEW cov: 12450 ft: 15653 corp: 21/425b lim: 45 exec/s: 30 rss: 75Mb L: 24/45 MS: 1 InsertRepeatedBytes- 00:07:47.567 [2024-10-07 09:26:42.988157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:42.988181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.567 #31 NEW cov: 12450 ft: 15692 corp: 22/442b lim: 45 exec/s: 31 rss: 75Mb L: 17/45 MS: 1 CopyPart- 00:07:47.567 [2024-10-07 09:26:43.048355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:43.048381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.567 #32 NEW cov: 12450 
ft: 15708 corp: 23/455b lim: 45 exec/s: 32 rss: 75Mb L: 13/45 MS: 1 InsertByte- 00:07:47.567 [2024-10-07 09:26:43.108555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.567 [2024-10-07 09:26:43.108579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.827 #33 NEW cov: 12450 ft: 15714 corp: 24/468b lim: 45 exec/s: 33 rss: 75Mb L: 13/45 MS: 1 ChangeBit- 00:07:47.827 [2024-10-07 09:26:43.148944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:473a4f4f cdw11:4e4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.148968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.827 [2024-10-07 09:26:43.149030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.149044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.827 [2024-10-07 09:26:43.149098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4e4f473a cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.149110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.827 #34 NEW cov: 12450 ft: 15728 corp: 25/502b lim: 45 exec/s: 34 rss: 75Mb L: 34/45 MS: 1 CopyPart- 00:07:47.827 [2024-10-07 09:26:43.208990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.209014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.827 [2024-10-07 09:26:43.209067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f474f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.209081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.827 #35 NEW cov: 12450 ft: 15739 corp: 26/523b lim: 45 exec/s: 35 rss: 75Mb L: 21/45 MS: 1 CopyPart- 00:07:47.827 [2024-10-07 09:26:43.249076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.249101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.827 [2024-10-07 09:26:43.249155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.249169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.827 #36 NEW cov: 12450 ft: 15750 corp: 27/544b lim: 45 exec/s: 36 rss: 75Mb L: 21/45 MS: 1 CopyPart- 00:07:47.827 [2024-10-07 09:26:43.309242] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:473a4f4f cdw11:4f470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.309267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.827 [2024-10-07 09:26:43.309322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.309336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.827 #37 NEW cov: 12450 ft: 15788 corp: 28/564b lim: 45 exec/s: 37 rss: 75Mb L: 20/45 MS: 1 EraseBytes- 00:07:47.827 [2024-10-07 09:26:43.349172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.827 [2024-10-07 09:26:43.349197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.087 #38 NEW cov: 12450 ft: 15817 corp: 29/577b lim: 45 exec/s: 38 rss: 75Mb L: 13/45 MS: 1 CopyPart- 00:07:48.087 [2024-10-07 09:26:43.410052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.410076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.410132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.410148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.410201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4242c2bd cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.410214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.410265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.410278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.410332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.410345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.087 #39 NEW cov: 12450 ft: 15832 corp: 30/622b lim: 45 exec/s: 39 rss: 75Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:48.087 [2024-10-07 09:26:43.450100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.450124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.450178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.450207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.450261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.450274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.450329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:42424242 cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.450342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.450396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.450410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.087 #40 NEW cov: 12450 ft: 15834 corp: 31/667b lim: 45 exec/s: 40 rss: 75Mb L: 45/45 MS: 1 ShuffleBytes- 00:07:48.087 [2024-10-07 09:26:43.489641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.489666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.087 #41 NEW cov: 12450 ft: 15860 corp: 32/680b lim: 45 exec/s: 41 rss: 75Mb L: 13/45 MS: 1 ShuffleBytes- 00:07:48.087 [2024-10-07 09:26:43.549943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:474f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.549970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.087 [2024-10-07 09:26:43.550025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f4f474f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.550042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.087 #42 NEW cov: 12450 ft: 15881 corp: 33/701b lim: 45 exec/s: 42 rss: 75Mb L: 21/45 MS: 1 ChangeBit- 00:07:48.087 [2024-10-07 09:26:43.609965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f474f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.087 [2024-10-07 09:26:43.609992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.346 #43 NEW cov: 12450 ft: 15894 corp: 34/717b lim: 45 exec/s: 43 rss: 75Mb L: 16/45 MS: 1 ChangeByte- 00:07:48.346 [2024-10-07 09:26:43.670426] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.346 [2024-10-07 09:26:43.670454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.346 [2024-10-07 09:26:43.670509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4f424f4f cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.346 [2024-10-07 09:26:43.670522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.346 [2024-10-07 09:26:43.670575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:42424f4f cdw11:42420002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.346 [2024-10-07 09:26:43.670587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.346 #44 NEW cov: 12450 ft: 15898 corp: 35/749b lim: 45 exec/s: 22 rss: 75Mb L: 32/45 MS: 1 CrossOver- 00:07:48.346 #44 DONE cov: 12450 ft: 15898 corp: 35/749b lim: 45 exec/s: 22 rss: 75Mb 00:07:48.346 Done 44 runs in 2 second(s) 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:48.346 09:26:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz 
-m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:48.346 [2024-10-07 09:26:43.889300] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:48.346 [2024-10-07 09:26:43.889398] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488846 ] 00:07:48.913 [2024-10-07 09:26:44.200181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.913 [2024-10-07 09:26:44.294422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.913 [2024-10-07 09:26:44.353280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.913 [2024-10-07 09:26:44.369488] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:48.913 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.913 INFO: Seed: 3517777620 00:07:48.913 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:48.913 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:48.913 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:48.913 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.913 #2 INITED exec/s: 0 rss: 67Mb 00:07:48.913 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:48.913 This may also happen if the target rejected all inputs we tried so far 00:07:48.913 [2024-10-07 09:26:44.425265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:48.913 [2024-10-07 09:26:44.425294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.913 [2024-10-07 09:26:44.425350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.913 [2024-10-07 09:26:44.425364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.913 [2024-10-07 09:26:44.425417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.913 [2024-10-07 09:26:44.425430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.172 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:49.172 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.172 #5 NEW cov: 12136 ft: 12132 corp: 2/8b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:49.430 [2024-10-07 09:26:44.745914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 00:07:49.430 [2024-10-07 09:26:44.745949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.430 [2024-10-07 09:26:44.746002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.430 [2024-10-07 09:26:44.746016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.430 [2024-10-07 09:26:44.746066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.430 [2024-10-07 09:26:44.746079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.431 #6 NEW cov: 12253 ft: 12783 corp: 3/15b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeBit- 00:07:49.431 [2024-10-07 09:26:44.805895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000acac cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.805922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.805989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000acac cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.806003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.431 #7 NEW cov: 12259 ft: 13191 corp: 4/20b lim: 10 exec/s: 0 rss: 74Mb L: 5/7 MS: 1 InsertRepeatedBytes- 00:07:49.431 [2024-10-07 09:26:44.846091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.846117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.846169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004a00 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.846182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.846234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.846248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.431 #8 NEW cov: 12344 ft: 13438 corp: 5/27b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ShuffleBytes- 00:07:49.431 [2024-10-07 09:26:44.906217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.906243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.906309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.906323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.906376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.906389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.431 #9 NEW cov: 12344 ft: 13598 corp: 6/34b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeBinInt- 00:07:49.431 [2024-10-07 09:26:44.946370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.946395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.946448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b6ff cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.946461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.431 [2024-10-07 09:26:44.946512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:49.431 [2024-10-07 09:26:44.946525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.431 #10 NEW cov: 12344 ft: 13738 corp: 7/41b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeBinInt- 00:07:49.690 [2024-10-07 09:26:45.006441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:49.690 [2024-10-07 09:26:45.006469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:49.690 [2024-10-07 09:26:45.006526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.006540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.691 #11 NEW cov: 12344 ft: 13794 corp: 8/45b lim: 10 exec/s: 0 rss: 74Mb L: 4/7 MS: 1 EraseBytes- 00:07:49.691 [2024-10-07 09:26:45.066577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000401 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.066604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.066655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.066670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.691 #12 NEW cov: 12344 ft: 13850 corp: 9/49b lim: 10 exec/s: 0 rss: 74Mb L: 4/7 MS: 1 ChangeBinInt- 00:07:49.691 [2024-10-07 09:26:45.126737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000acac cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.126762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.126818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ac0a cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.126831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.691 #13 NEW cov: 12344 ft: 13899 corp: 10/53b lim: 10 exec/s: 0 rss: 74Mb L: 4/7 MS: 1 EraseBytes- 00:07:49.691 [2024-10-07 09:26:45.187103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000078b4 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.187128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.187182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fdd6 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.187195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.187246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000007f5 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.187258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.187309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002500 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.187323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.691 #14 NEW cov: 12344 ft: 14134 corp: 11/62b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "x\264\375\326\007\365%\000"- 00:07:49.691 [2024-10-07 09:26:45.227113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.227137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.227189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.227203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.691 [2024-10-07 09:26:45.227254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.691 [2024-10-07 09:26:45.227267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.691 #15 NEW cov: 12344 ft: 14152 corp: 12/69b lim: 10 exec/s: 0 rss: 74Mb L: 7/9 MS: 1 ShuffleBytes- 00:07:49.950 [2024-10-07 09:26:45.267333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000500 cdw11:00000000 00:07:49.950 [2024-10-07 09:26:45.267359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.950 [2024-10-07 09:26:45.267411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.950 [2024-10-07 09:26:45.267424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.950 [2024-10-07 09:26:45.267474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000b6ff cdw11:00000000 00:07:49.950 [2024-10-07 09:26:45.267504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.950 [2024-10-07 09:26:45.267556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.267569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.951 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:49.951 #16 NEW cov: 12367 ft: 14178 corp: 13/78b lim: 10 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\005\000"- 00:07:49.951 [2024-10-07 09:26:45.327171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.327196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.951 #17 NEW cov: 12367 ft: 14384 corp: 14/80b lim: 10 exec/s: 0 rss: 74Mb L: 2/9 MS: 1 EraseBytes- 00:07:49.951 [2024-10-07 09:26:45.367518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.367542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.951 [2024-10-07 09:26:45.367593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.951 
[2024-10-07 09:26:45.367606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.951 [2024-10-07 09:26:45.367658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000401 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.367672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.951 #18 NEW cov: 12367 ft: 14413 corp: 15/87b lim: 10 exec/s: 18 rss: 74Mb L: 7/9 MS: 1 CrossOver- 00:07:49.951 [2024-10-07 09:26:45.427696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.427721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.951 [2024-10-07 09:26:45.427772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.427786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.951 [2024-10-07 09:26:45.427842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.427856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.951 #19 NEW cov: 12367 ft: 14488 corp: 16/94b lim: 10 exec/s: 19 rss: 75Mb L: 7/9 MS: 1 CopyPart- 00:07:49.951 [2024-10-07 09:26:45.467680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.467708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.951 [2024-10-07 09:26:45.467761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:07:49.951 [2024-10-07 09:26:45.467774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.951 #20 NEW cov: 12367 ft: 14548 corp: 17/98b lim: 10 exec/s: 20 rss: 75Mb L: 4/9 MS: 1 ShuffleBytes- 00:07:50.211 [2024-10-07 09:26:45.528213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000078b4 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.528238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.528292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b4fd cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.528306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.528372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d607 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.528385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.528438] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f525 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.528451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.528501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.528515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.211 #21 NEW cov: 12367 ft: 14600 corp: 18/108b lim: 10 exec/s: 21 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:07:50.211 [2024-10-07 09:26:45.588255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000078b4 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.588279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.588332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b4fd cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.588345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.588398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d6f5 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.588411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.588460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002500 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.588473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.211 #22 NEW cov: 12367 ft: 14616 corp: 19/117b lim: 10 exec/s: 22 rss: 75Mb L: 9/10 MS: 1 EraseBytes- 00:07:50.211 [2024-10-07 09:26:45.648055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.648079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.211 #23 NEW cov: 12367 ft: 14648 corp: 20/119b lim: 10 exec/s: 23 rss: 75Mb L: 2/10 MS: 1 EraseBytes- 00:07:50.211 [2024-10-07 09:26:45.708505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.708533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.708584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.708597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.211 [2024-10-07 09:26:45.708649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.708661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.211 #24 NEW cov: 12367 ft: 14664 corp: 21/126b lim: 10 exec/s: 24 rss: 75Mb L: 7/10 MS: 1 ShuffleBytes- 00:07:50.211 [2024-10-07 09:26:45.748325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.211 [2024-10-07 09:26:45.748349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 #28 NEW cov: 12367 ft: 14680 corp: 22/128b lim: 10 exec/s: 28 rss: 75Mb L: 2/10 MS: 4 EraseBytes-ShuffleBytes-CrossOver-CopyPart- 00:07:50.471 [2024-10-07 09:26:45.808546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000401 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.808571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 #29 NEW cov: 12367 ft: 14684 corp: 23/131b lim: 10 exec/s: 29 rss: 75Mb L: 3/10 MS: 1 EraseBytes- 00:07:50.471 [2024-10-07 09:26:45.848965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000078b4 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.848990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.849045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fd2b cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.849058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.849109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000007f5 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.849122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.849171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002500 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.849184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.471 #30 NEW cov: 12367 ft: 14697 corp: 24/140b lim: 10 exec/s: 30 rss: 75Mb L: 9/10 MS: 1 ChangeByte- 00:07:50.471 [2024-10-07 09:26:45.889053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000310a cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.889077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.889129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.889143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.889194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.889207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 
09:26:45.889260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.889273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.471 #31 NEW cov: 12367 ft: 14744 corp: 25/148b lim: 10 exec/s: 31 rss: 75Mb L: 8/10 MS: 1 InsertByte- 00:07:50.471 [2024-10-07 09:26:45.929064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.929089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.929141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.929155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:45.929206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.929220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.471 #32 NEW cov: 12367 ft: 14751 corp: 26/155b lim: 10 exec/s: 32 rss: 75Mb L: 7/10 MS: 1 ShuffleBytes- 00:07:50.471 [2024-10-07 09:26:45.968959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000acac cdw11:00000000 00:07:50.471 [2024-10-07 09:26:45.968983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 #33 NEW cov: 12367 ft: 14771 corp: 27/158b lim: 10 exec/s: 33 rss: 75Mb L: 3/10 MS: 1 EraseBytes- 00:07:50.471 [2024-10-07 09:26:46.029285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:46.029309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.471 [2024-10-07 09:26:46.029362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:07:50.471 [2024-10-07 09:26:46.029376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.731 #34 NEW cov: 12367 ft: 14792 corp: 28/162b lim: 10 exec/s: 34 rss: 75Mb L: 4/10 MS: 1 CopyPart- 00:07:50.731 [2024-10-07 09:26:46.069604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.731 [2024-10-07 09:26:46.069629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.731 [2024-10-07 09:26:46.069698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004a00 cdw11:00000000 00:07:50.731 [2024-10-07 09:26:46.069712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.731 [2024-10-07 09:26:46.069766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 
cdw10:00000005 cdw11:00000000 00:07:50.731 [2024-10-07 09:26:46.069780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.731 [2024-10-07 09:26:46.069836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.731 [2024-10-07 09:26:46.069850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.732 #35 NEW cov: 12367 ft: 14826 corp: 29/171b lim: 10 exec/s: 35 rss: 75Mb L: 9/10 MS: 1 PersAutoDict- DE: "\005\000"- 00:07:50.732 [2024-10-07 09:26:46.109625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.109650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.109723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.109736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.109788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c600 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.109801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.732 #36 NEW cov: 12367 ft: 14835 corp: 30/178b lim: 10 exec/s: 36 rss: 75Mb L: 7/10 MS: 1 ChangeByte- 00:07:50.732 [2024-10-07 09:26:46.149577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000acac cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.149601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.149668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ac0a cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.149682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.732 #37 NEW cov: 12367 ft: 14849 corp: 31/182b lim: 10 exec/s: 37 rss: 75Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:50.732 [2024-10-07 09:26:46.190002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.190029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.190099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b6ff cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.190113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.190169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000500 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.190182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:50.732 [2024-10-07 09:26:46.190232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff00 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.190245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.732 #38 NEW cov: 12367 ft: 14854 corp: 32/191b lim: 10 exec/s: 38 rss: 75Mb L: 9/10 MS: 1 PersAutoDict- DE: "\005\000"- 00:07:50.732 [2024-10-07 09:26:46.230192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006578 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.230217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.230269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b4fd cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.230283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.230334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d607 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.230347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.230399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f525 cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.230412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.732 [2024-10-07 09:26:46.230468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.230481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:50.732 #39 NEW cov: 12367 ft: 14858 corp: 33/201b lim: 10 exec/s: 39 rss: 75Mb L: 10/10 MS: 1 InsertByte- 00:07:50.732 [2024-10-07 09:26:46.269832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007d7e cdw11:00000000 00:07:50.732 [2024-10-07 09:26:46.269857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.732 #43 NEW cov: 12367 ft: 14867 corp: 34/203b lim: 10 exec/s: 43 rss: 75Mb L: 2/10 MS: 4 CrossOver-ChangeBit-ShuffleBytes-InsertByte- 00:07:50.992 [2024-10-07 09:26:46.310068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000acac cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.310095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.992 [2024-10-07 09:26:46.310164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b10a cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.310178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.992 #44 NEW cov: 12367 ft: 14907 corp: 35/207b lim: 10 exec/s: 44 rss: 75Mb L: 4/10 MS: 1 ChangeBinInt- 00:07:50.992 [2024-10-07 09:26:46.350280] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000acac cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.350306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.992 [2024-10-07 09:26:46.350374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ac26 cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.350387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.992 [2024-10-07 09:26:46.350439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ac0a cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.350452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.992 #45 NEW cov: 12367 ft: 14912 corp: 36/213b lim: 10 exec/s: 45 rss: 75Mb L: 6/10 MS: 1 InsertByte- 00:07:50.992 [2024-10-07 09:26:46.390529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000bb00 cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.390554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.992 [2024-10-07 09:26:46.390606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.390619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.992 [2024-10-07 09:26:46.390670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.390683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.992 [2024-10-07 09:26:46.390733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:50.992 [2024-10-07 09:26:46.390746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.992 #46 NEW cov: 12367 ft: 14920 corp: 37/221b lim: 10 exec/s: 23 rss: 75Mb L: 8/10 MS: 1 InsertByte- 00:07:50.992 #46 DONE cov: 12367 ft: 14920 corp: 37/221b lim: 10 exec/s: 23 rss: 75Mb 00:07:50.992 ###### Recommended dictionary. ###### 00:07:50.992 "x\264\375\326\007\365%\000" # Uses: 0 00:07:50.992 "\005\000" # Uses: 2 00:07:50.992 ###### End of recommended dictionary. 
###### 00:07:50.992 Done 46 runs in 2 second(s) 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:51.253 09:26:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:51.253 [2024-10-07 09:26:46.627549] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:51.253 [2024-10-07 09:26:46.627633] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489208 ] 00:07:51.513 [2024-10-07 09:26:46.937825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.513 [2024-10-07 09:26:47.033152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.772 [2024-10-07 09:26:47.092073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.772 [2024-10-07 09:26:47.108277] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:51.772 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:51.772 INFO: Seed: 1959847531 00:07:51.772 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:51.772 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:51.772 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:51.772 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.772 #2 INITED exec/s: 0 rss: 68Mb 00:07:51.772 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.772 This may also happen if the target rejected all inputs we tried so far 00:07:51.772 [2024-10-07 09:26:47.157594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:51.772 [2024-10-07 09:26:47.157622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.772 [2024-10-07 09:26:47.157682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.772 [2024-10-07 09:26:47.157696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.031 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:52.031 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:52.031 #3 NEW cov: 12140 ft: 12117 corp: 2/6b lim: 10 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:52.031 [2024-10-07 09:26:47.478338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.478378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.031 [2024-10-07 09:26:47.478436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.478452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.031 #6 NEW cov: 12253 ft: 12817 corp: 3/10b lim: 10 exec/s: 0 rss: 75Mb L: 4/5 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:52.031 [2024-10-07 09:26:47.518572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.518600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.031 [2024-10-07 09:26:47.518653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.518667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.031 [2024-10-07 09:26:47.518718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.518731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:52.031 [2024-10-07 09:26:47.518784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.518797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.031 #7 NEW cov: 12259 ft: 13264 corp: 4/18b lim: 10 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:52.031 [2024-10-07 09:26:47.558324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.031 [2024-10-07 09:26:47.558350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.290 #8 NEW cov: 12344 ft: 13722 corp: 5/21b lim: 10 exec/s: 0 rss: 75Mb L: 3/8 MS: 1 EraseBytes- 00:07:52.290 [2024-10-07 09:26:47.618600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a10 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.618626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.290 [2024-10-07 09:26:47.618677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.618690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.290 #9 NEW cov: 12344 ft: 13828 corp: 6/26b lim: 10 exec/s: 0 rss: 75Mb L: 5/8 MS: 1 ChangeBit- 00:07:52.290 [2024-10-07 09:26:47.658598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.658625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.290 #10 NEW cov: 12344 ft: 13938 corp: 7/29b lim: 10 exec/s: 0 rss: 75Mb L: 3/8 MS: 1 ChangeByte- 00:07:52.290 [2024-10-07 09:26:47.719004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.719030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.290 [2024-10-07 09:26:47.719098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a10 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.719112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.290 [2024-10-07 09:26:47.719164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.719177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.290 #11 NEW cov: 12344 ft: 14154 corp: 8/36b lim: 10 exec/s: 0 rss: 75Mb L: 7/8 MS: 1 CrossOver- 00:07:52.290 [2024-10-07 09:26:47.779204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.779230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.290 
[2024-10-07 09:26:47.779281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003100 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.779295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.290 [2024-10-07 09:26:47.779347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.779377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.290 #12 NEW cov: 12344 ft: 14176 corp: 9/42b lim: 10 exec/s: 0 rss: 75Mb L: 6/8 MS: 1 InsertByte- 00:07:52.290 [2024-10-07 09:26:47.819144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.819168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.290 [2024-10-07 09:26:47.819221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.290 [2024-10-07 09:26:47.819234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.290 #17 NEW cov: 12344 ft: 14274 corp: 10/46b lim: 10 exec/s: 0 rss: 75Mb L: 4/8 MS: 5 ChangeBinInt-ChangeByte-ChangeByte-ChangeBit-CrossOver- 00:07:52.549 [2024-10-07 09:26:47.859232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.859258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:47.859328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.859342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.549 #18 NEW cov: 12344 ft: 14418 corp: 11/51b lim: 10 exec/s: 0 rss: 75Mb L: 5/8 MS: 1 ChangeBinInt- 00:07:52.549 [2024-10-07 09:26:47.899479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002e0a cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.899503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:47.899576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001000 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.899591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:47.899646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.899659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.549 #19 NEW cov: 12344 ft: 14458 corp: 12/57b lim: 10 exec/s: 0 rss: 75Mb L: 6/8 MS: 1 InsertByte- 00:07:52.549 [2024-10-07 09:26:47.939593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.939617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:47.939687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003100 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.939701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:47.939754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000025 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.939768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.549 #20 NEW cov: 12344 ft: 14467 corp: 13/64b lim: 10 exec/s: 0 rss: 75Mb L: 7/8 MS: 1 InsertByte- 00:07:52.549 [2024-10-07 09:26:47.999650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.999674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:47.999741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:47.999755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.549 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:52.549 #21 NEW cov: 12367 ft: 14493 corp: 14/69b lim: 10 exec/s: 0 rss: 75Mb L: 5/8 MS: 1 EraseBytes- 00:07:52.549 [2024-10-07 09:26:48.059947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:48.059972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:48.060041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a10 cdw11:00000000 00:07:52.549 [2024-10-07 09:26:48.060055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.549 [2024-10-07 09:26:48.060107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b700 cdw11:00000000 00:07:52.550 [2024-10-07 09:26:48.060120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.550 #22 NEW cov: 12367 ft: 14518 corp: 15/76b lim: 10 exec/s: 0 rss: 75Mb L: 7/8 MS: 1 ChangeByte- 00:07:52.809 [2024-10-07 09:26:48.119994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.120020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.120090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.809 [2024-10-07 
09:26:48.120104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.809 #23 NEW cov: 12367 ft: 14539 corp: 16/80b lim: 10 exec/s: 23 rss: 75Mb L: 4/8 MS: 1 InsertByte- 00:07:52.809 [2024-10-07 09:26:48.160316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.160342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.160412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.160426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.160478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001000 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.160491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.160543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.160557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.809 #24 NEW cov: 12367 ft: 14576 corp: 17/88b lim: 10 exec/s: 24 rss: 75Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:52.809 [2024-10-07 09:26:48.200218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.200244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.200296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000031 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.200310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.809 #25 NEW cov: 12367 ft: 14608 corp: 18/92b lim: 10 exec/s: 25 rss: 75Mb L: 4/8 MS: 1 InsertByte- 00:07:52.809 [2024-10-07 09:26:48.240459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a04 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.240483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.240552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.240566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.240619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.240632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.809 #26 NEW cov: 12367 ft: 14621 corp: 19/98b lim: 10 exec/s: 26 rss: 
75Mb L: 6/8 MS: 1 InsertByte- 00:07:52.809 [2024-10-07 09:26:48.300389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009c00 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.300413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.809 #27 NEW cov: 12367 ft: 14633 corp: 20/101b lim: 10 exec/s: 27 rss: 75Mb L: 3/8 MS: 1 ChangeByte- 00:07:52.809 [2024-10-07 09:26:48.340865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.340890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.340945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003110 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.340962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.341029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.341042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.809 [2024-10-07 09:26:48.341096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.809 [2024-10-07 09:26:48.341109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.809 #28 NEW cov: 12367 ft: 14649 corp: 21/109b lim: 10 exec/s: 28 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:07:53.069 [2024-10-07 09:26:48.380612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000a8 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.380638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.069 #29 NEW cov: 12367 ft: 14753 corp: 22/111b lim: 10 exec/s: 29 rss: 75Mb L: 2/8 MS: 1 EraseBytes- 00:07:53.069 [2024-10-07 09:26:48.441143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.441167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.441235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.441249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.441301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.441315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.441365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:53.069 [2024-10-07 
09:26:48.441378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.069 #30 NEW cov: 12367 ft: 14760 corp: 23/120b lim: 10 exec/s: 30 rss: 75Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:53.069 [2024-10-07 09:26:48.481155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000700a cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.481180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.481232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000700a cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.481244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.481296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.481309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.069 #31 NEW cov: 12367 ft: 14782 corp: 24/127b lim: 10 exec/s: 31 rss: 76Mb L: 7/9 MS: 1 CopyPart- 00:07:53.069 [2024-10-07 09:26:48.541319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.541344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.541396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001000 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.541413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.541464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.541477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.069 #32 NEW cov: 12367 ft: 14810 corp: 25/134b lim: 10 exec/s: 32 rss: 76Mb L: 7/9 MS: 1 EraseBytes- 00:07:53.069 [2024-10-07 09:26:48.601610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.601636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.601688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000010a5 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.601701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.601754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.601767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.069 [2024-10-07 09:26:48.601828] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.069 [2024-10-07 09:26:48.601841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.368 #33 NEW cov: 12367 ft: 14828 corp: 26/142b lim: 10 exec/s: 33 rss: 76Mb L: 8/9 MS: 1 InsertByte- 00:07:53.368 [2024-10-07 09:26:48.661366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007a00 cdw11:00000000 00:07:53.368 [2024-10-07 09:26:48.661390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.368 #34 NEW cov: 12367 ft: 14885 corp: 27/145b lim: 10 exec/s: 34 rss: 76Mb L: 3/9 MS: 1 ChangeByte- 00:07:53.368 [2024-10-07 09:26:48.701510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.701535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.369 #35 NEW cov: 12367 ft: 14887 corp: 28/148b lim: 10 exec/s: 35 rss: 76Mb L: 3/9 MS: 1 ChangeBinInt- 00:07:53.369 [2024-10-07 09:26:48.741981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.742005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.742059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000a5 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.742073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.742123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.742136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.742188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.742201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.369 #36 NEW cov: 12367 ft: 14902 corp: 29/156b lim: 10 exec/s: 36 rss: 76Mb L: 8/9 MS: 1 CopyPart- 00:07:53.369 [2024-10-07 09:26:48.802138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.802163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.802233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001070 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.802247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.802302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 
00:07:53.369 [2024-10-07 09:26:48.802315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.802366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.802380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.369 #37 NEW cov: 12367 ft: 14912 corp: 30/164b lim: 10 exec/s: 37 rss: 76Mb L: 8/9 MS: 1 ChangeByte- 00:07:53.369 [2024-10-07 09:26:48.842038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.842063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.842118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003100 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.842132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.369 #38 NEW cov: 12367 ft: 14929 corp: 31/168b lim: 10 exec/s: 38 rss: 76Mb L: 4/9 MS: 1 EraseBytes- 00:07:53.369 [2024-10-07 09:26:48.882320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dddd cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.882346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.882399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000dddd cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.882413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.882480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000dd0a cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.882495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.369 #39 NEW cov: 12367 ft: 14936 corp: 32/174b lim: 10 exec/s: 39 rss: 76Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:07:53.369 [2024-10-07 09:26:48.922606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.922632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.922701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.922715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.922768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.922782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.922836] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.922853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.369 [2024-10-07 09:26:48.922908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.369 [2024-10-07 09:26:48.922920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.628 #40 NEW cov: 12367 ft: 14972 corp: 33/184b lim: 10 exec/s: 40 rss: 76Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:53.628 [2024-10-07 09:26:48.982280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:48.982304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.628 #41 NEW cov: 12367 ft: 14977 corp: 34/187b lim: 10 exec/s: 41 rss: 76Mb L: 3/10 MS: 1 ShuffleBytes- 00:07:53.628 [2024-10-07 09:26:49.022531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.022556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.022626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.022640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.628 #42 NEW cov: 12367 ft: 15029 corp: 35/192b lim: 10 exec/s: 42 rss: 76Mb L: 5/10 MS: 1 ChangeBinInt- 00:07:53.628 [2024-10-07 09:26:49.062883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.062910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.062978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.062993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.063048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.063062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.063114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.063128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.628 #43 NEW cov: 12367 ft: 15046 corp: 36/200b lim: 10 exec/s: 43 rss: 76Mb L: 8/10 MS: 1 CrossOver- 00:07:53.628 [2024-10-07 09:26:49.102769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000a00 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.102793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.102866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.102881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.143006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f0a cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.143031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.143084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.143101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.628 [2024-10-07 09:26:49.143170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.628 [2024-10-07 09:26:49.143185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.628 #45 NEW cov: 12367 ft: 15050 corp: 37/206b lim: 10 exec/s: 22 rss: 76Mb L: 6/10 MS: 2 ShuffleBytes-InsertByte- 00:07:53.628 #45 DONE cov: 12367 ft: 15050 corp: 37/206b lim: 10 exec/s: 22 rss: 76Mb 00:07:53.628 Done 45 runs in 2 second(s) 00:07:53.887 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:53.887 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.887 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.887 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:53.887 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": 
"4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:53.888 09:26:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:53.888 [2024-10-07 09:26:49.351375] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:53.888 [2024-10-07 09:26:49.351457] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489569 ] 00:07:54.146 [2024-10-07 09:26:49.663293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.405 [2024-10-07 09:26:49.758847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.405 [2024-10-07 09:26:49.817948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.405 [2024-10-07 09:26:49.834139] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:54.405 INFO: Running with entropic power schedule (0xFF, 100). 00:07:54.405 INFO: Seed: 392843387 00:07:54.405 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:54.405 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:54.405 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:54.405 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.405 [2024-10-07 09:26:49.899666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.405 [2024-10-07 09:26:49.899695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.405 #2 INITED cov: 12150 ft: 12148 corp: 1/1b exec/s: 0 rss: 73Mb 00:07:54.405 [2024-10-07 09:26:49.940009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.405 [2024-10-07 09:26:49.940038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.405 [2024-10-07 09:26:49.940097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.405 [2024-10-07 09:26:49.940112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.405 [2024-10-07 09:26:49.940171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:54.405 [2024-10-07 09:26:49.940184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.665 #3 NEW cov: 12280 ft: 13296 corp: 2/4b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CMP- DE: "\000\000"- 00:07:54.665 [2024-10-07 09:26:50.000006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.000032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.000107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.000122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.665 #4 NEW cov: 12286 ft: 13795 corp: 3/6b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 InsertByte- 00:07:54.665 [2024-10-07 09:26:50.040334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.040363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.040424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.040439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.040495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.040510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.665 #5 NEW cov: 12371 ft: 13995 corp: 4/9b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 ChangeBinInt- 00:07:54.665 [2024-10-07 09:26:50.100325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.100354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.100416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.100431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.665 #6 NEW cov: 12371 ft: 14125 corp: 5/11b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 CrossOver- 00:07:54.665 [2024-10-07 09:26:50.140399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.140424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.140480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.140495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.665 #7 NEW cov: 12371 ft: 14196 corp: 6/13b lim: 5 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ChangeByte- 00:07:54.665 [2024-10-07 09:26:50.200753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.200781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.200859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.200874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.665 [2024-10-07 09:26:50.200937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.665 [2024-10-07 09:26:50.200957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.665 #8 NEW cov: 12371 ft: 14280 corp: 7/16b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 CopyPart- 00:07:54.924 [2024-10-07 09:26:50.241177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.924 [2024-10-07 09:26:50.241207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.924 [2024-10-07 09:26:50.241266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.241281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.241338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.241353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.241410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.241424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.241483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.241497] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.925 #9 NEW cov: 12371 ft: 14642 corp: 8/21b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:07:54.925 [2024-10-07 09:26:50.301030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.301056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.301116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.301130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.301188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.301202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.925 #10 NEW cov: 12371 ft: 14661 corp: 9/24b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:54.925 [2024-10-07 09:26:50.340960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.340988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.341046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.341062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.925 #11 NEW cov: 12371 ft: 14711 corp: 10/26b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:54.925 [2024-10-07 09:26:50.401665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.401692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.401751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.401766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.401828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.401843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.401901] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.401915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.401973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.401986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.925 #12 NEW cov: 12371 ft: 14754 corp: 11/31b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:54.925 [2024-10-07 09:26:50.441080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.441115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.925 #13 NEW cov: 12371 ft: 14812 corp: 12/32b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:07:54.925 [2024-10-07 09:26:50.481831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.481873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.481934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.481948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.482011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.482033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.482092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.482106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.925 [2024-10-07 09:26:50.482163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.925 [2024-10-07 09:26:50.482177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.185 #14 NEW cov: 12371 ft: 14849 corp: 13/37b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:55.185 [2024-10-07 09:26:50.541533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.541560] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.541634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.541650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.185 #15 NEW cov: 12371 ft: 14857 corp: 14/39b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:07:55.185 [2024-10-07 09:26:50.581605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.581632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.581690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.581704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.185 #16 NEW cov: 12371 ft: 14934 corp: 15/41b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:55.185 [2024-10-07 09:26:50.642323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.642349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.642412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.642426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.642483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.642496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.642554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.642567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.642624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.642637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.185 #17 NEW cov: 12371 ft: 14974 corp: 16/46b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:55.185 [2024-10-07 09:26:50.702492] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.702517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.702593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.702608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.702667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.702680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.702737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.702751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.185 [2024-10-07 09:26:50.702805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.702823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.185 #18 NEW cov: 12371 ft: 15050 corp: 17/51b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:55.185 [2024-10-07 09:26:50.741911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.185 [2024-10-07 09:26:50.741936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.704 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:55.704 #19 NEW cov: 12394 ft: 15058 corp: 18/52b lim: 5 exec/s: 19 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:55.704 [2024-10-07 09:26:51.063740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.063810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.063925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.063953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.064037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 
[2024-10-07 09:26:51.064064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.064146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.064172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.064255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.064280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.704 #20 NEW cov: 12394 ft: 15291 corp: 19/57b lim: 5 exec/s: 20 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:07:55.704 [2024-10-07 09:26:51.133536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.133565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.133625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.133639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.133697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.133711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.133767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.133780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.133841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.133855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.704 #21 NEW cov: 12394 ft: 15321 corp: 20/62b lim: 5 exec/s: 21 rss: 75Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:55.704 [2024-10-07 09:26:51.193367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.193393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.193467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.193485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.193542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.193555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.704 #22 NEW cov: 12394 ft: 15391 corp: 21/65b lim: 5 exec/s: 22 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:07:55.704 [2024-10-07 09:26:51.253824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.253851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.253925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.253939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.253998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.254012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.704 [2024-10-07 09:26:51.254068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.704 [2024-10-07 09:26:51.254081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.705 [2024-10-07 09:26:51.254140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.705 [2024-10-07 09:26:51.254154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.964 #23 NEW cov: 12394 ft: 15422 corp: 22/70b lim: 5 exec/s: 23 rss: 75Mb L: 5/5 MS: 1 ChangeBit- 00:07:55.964 [2024-10-07 09:26:51.293951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.964 [2024-10-07 09:26:51.293977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.964 [2024-10-07 09:26:51.294034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.294048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.294106] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.294119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.294176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.294189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.294245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.294262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.354114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.354140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.354199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.354213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.354276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.354296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.354373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.354389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.354448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.354464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.965 #25 NEW cov: 12394 ft: 15433 corp: 23/75b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 2 ChangeBit-CMP- DE: "\001\000\000\000"- 00:07:55.965 [2024-10-07 09:26:51.393776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.393802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.393865] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.393880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.965 #26 NEW cov: 12394 ft: 15448 corp: 24/77b lim: 5 exec/s: 26 rss: 75Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:55.965 [2024-10-07 09:26:51.454288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.454314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.454369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.454383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.454436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.454449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.454505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.454522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.965 #27 NEW cov: 12394 ft: 15470 corp: 25/81b lim: 5 exec/s: 27 rss: 75Mb L: 4/5 MS: 1 InsertByte- 00:07:55.965 [2024-10-07 09:26:51.514124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.514150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.965 [2024-10-07 09:26:51.514208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.965 [2024-10-07 09:26:51.514222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.225 #28 NEW cov: 12394 ft: 15483 corp: 26/83b lim: 5 exec/s: 28 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:56.225 [2024-10-07 09:26:51.574602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.225 [2024-10-07 09:26:51.574630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.225 [2024-10-07 09:26:51.574689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.225 [2024-10-07 09:26:51.574704] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.225 [2024-10-07 09:26:51.574760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.225 [2024-10-07 09:26:51.574773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.225 [2024-10-07 09:26:51.574832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.225 [2024-10-07 09:26:51.574846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.225 #29 NEW cov: 12394 ft: 15521 corp: 27/87b lim: 5 exec/s: 29 rss: 75Mb L: 4/5 MS: 1 CrossOver- 00:07:56.225 [2024-10-07 09:26:51.634442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.225 [2024-10-07 09:26:51.634468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.225 [2024-10-07 09:26:51.634527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.634541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.226 #30 NEW cov: 12394 ft: 15536 corp: 28/89b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:56.226 [2024-10-07 09:26:51.694447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.694473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.226 #31 NEW cov: 12394 ft: 15561 corp: 29/90b lim: 5 exec/s: 31 rss: 75Mb L: 1/5 MS: 1 CrossOver- 00:07:56.226 [2024-10-07 09:26:51.755281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.755308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.226 [2024-10-07 09:26:51.755370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.755384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.226 [2024-10-07 09:26:51.755441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.755454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.226 [2024-10-07 09:26:51.755508] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.755521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.226 [2024-10-07 09:26:51.755575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.226 [2024-10-07 09:26:51.755589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.226 #32 NEW cov: 12394 ft: 15590 corp: 30/95b lim: 5 exec/s: 32 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:56.485 [2024-10-07 09:26:51.795362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.485 [2024-10-07 09:26:51.795389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.486 [2024-10-07 09:26:51.795450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.795463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.486 [2024-10-07 09:26:51.795520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.795533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.486 [2024-10-07 09:26:51.795586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.795599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.486 [2024-10-07 09:26:51.795652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.795665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.486 #33 NEW cov: 12394 ft: 15602 corp: 31/100b lim: 5 exec/s: 33 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:56.486 [2024-10-07 09:26:51.855367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.855392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.486 [2024-10-07 09:26:51.855449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.855465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:56.486 [2024-10-07 09:26:51.855524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.855538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.486 [2024-10-07 09:26:51.855594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.486 [2024-10-07 09:26:51.855607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.486 #34 NEW cov: 12394 ft: 15604 corp: 32/104b lim: 5 exec/s: 17 rss: 76Mb L: 4/5 MS: 1 InsertByte- 00:07:56.486 #34 DONE cov: 12394 ft: 15604 corp: 32/104b lim: 5 exec/s: 17 rss: 76Mb 00:07:56.486 ###### Recommended dictionary. ###### 00:07:56.486 "\000\000" # Uses: 3 00:07:56.486 "\001\000\000\000" # Uses: 0 00:07:56.486 ###### End of recommended dictionary. ###### 00:07:56.486 Done 34 runs in 2 second(s) 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:56.486 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:56.746 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:56.746 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:56.746 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.746 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:56.746 09:26:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:56.746 [2024-10-07 09:26:52.085242] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:56.746 [2024-10-07 09:26:52.085320] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489937 ] 00:07:57.005 [2024-10-07 09:26:52.401690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.005 [2024-10-07 09:26:52.493846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.005 [2024-10-07 09:26:52.553187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.265 [2024-10-07 09:26:52.569401] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:57.265 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.265 INFO: Seed: 3126853335 00:07:57.265 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:57.265 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:57.265 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:57.265 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.265 [2024-10-07 09:26:52.624965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.624994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.265 #2 INITED cov: 12168 ft: 12164 corp: 1/1b exec/s: 0 rss: 73Mb 00:07:57.265 [2024-10-07 09:26:52.665175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.665201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.665277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.665291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.265 #3 NEW cov: 12281 ft: 13530 corp: 2/3b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:07:57.265 [2024-10-07 09:26:52.725157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.725182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.265 #4 NEW cov: 12287 ft: 13703 corp: 3/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:57.265 [2024-10-07 09:26:52.765589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:57.265 [2024-10-07 09:26:52.765614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.765690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.765705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.765764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.765777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.265 #5 NEW cov: 12372 ft: 14088 corp: 4/7b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:07:57.265 [2024-10-07 09:26:52.826086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.826111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.826173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.826188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.826251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.826265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.826322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.826335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.265 [2024-10-07 09:26:52.826393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.265 [2024-10-07 09:26:52.826406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.525 #6 NEW cov: 12372 ft: 14471 corp: 5/12b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:07:57.525 [2024-10-07 09:26:52.885558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.525 [2024-10-07 09:26:52.885583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.525 #7 NEW cov: 12372 ft: 14509 corp: 6/13b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:57.525 [2024-10-07 09:26:52.945782] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.525 [2024-10-07 09:26:52.945809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.525 #8 NEW cov: 12372 ft: 14612 corp: 7/14b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:57.525 [2024-10-07 09:26:53.005925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.525 [2024-10-07 09:26:53.005951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.525 #9 NEW cov: 12372 ft: 14758 corp: 8/15b lim: 5 exec/s: 0 rss: 74Mb L: 1/5 MS: 1 CrossOver- 00:07:57.525 [2024-10-07 09:26:53.066398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.525 [2024-10-07 09:26:53.066423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.525 [2024-10-07 09:26:53.066499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.525 [2024-10-07 09:26:53.066513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.525 [2024-10-07 09:26:53.066571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.525 [2024-10-07 09:26:53.066584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.784 #10 NEW cov: 12372 ft: 14818 corp: 9/18b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:57.784 [2024-10-07 09:26:53.106524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.784 [2024-10-07 09:26:53.106550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.784 [2024-10-07 09:26:53.106626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.784 [2024-10-07 09:26:53.106644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.784 [2024-10-07 09:26:53.106704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.784 [2024-10-07 09:26:53.106718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.784 #11 NEW cov: 12372 ft: 14848 corp: 10/21b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:07:57.784 [2024-10-07 09:26:53.166712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.784 [2024-10-07 09:26:53.166738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.166824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.166839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.166897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.166923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.785 #12 NEW cov: 12372 ft: 14918 corp: 11/24b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:07:57.785 [2024-10-07 09:26:53.207198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.207223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.207284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.207297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.207370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.207384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.207440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.207453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.207512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.207525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.785 #13 NEW cov: 12372 ft: 14954 corp: 12/29b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:07:57.785 [2024-10-07 09:26:53.267328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.267353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.267412] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.267429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.267484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.267497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.267554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.267567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.267626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.267639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.785 #14 NEW cov: 12372 ft: 15046 corp: 13/34b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:57.785 [2024-10-07 09:26:53.327121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.327147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.327221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.327235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.785 [2024-10-07 09:26:53.327294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:57.785 [2024-10-07 09:26:53.327307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.044 #15 NEW cov: 12372 ft: 15131 corp: 14/37b lim: 5 exec/s: 0 rss: 74Mb L: 3/5 MS: 1 ChangeByte- 00:07:58.044 [2024-10-07 09:26:53.367088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.044 [2024-10-07 09:26:53.367112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.044 [2024-10-07 09:26:53.367187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.044 [2024-10-07 09:26:53.367202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:58.044 #16 NEW cov: 12372 ft: 15176 corp: 15/39b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:07:58.044 [2024-10-07 09:26:53.427742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.044 [2024-10-07 09:26:53.427767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.044 [2024-10-07 09:26:53.427847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.044 [2024-10-07 09:26:53.427862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.044 [2024-10-07 09:26:53.427926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.427939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.045 [2024-10-07 09:26:53.427997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.428011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.045 [2024-10-07 09:26:53.428069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.428083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.045 #17 NEW cov: 12372 ft: 15212 corp: 16/44b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:07:58.045 [2024-10-07 09:26:53.487765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.487789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.045 [2024-10-07 09:26:53.487867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.487882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.045 [2024-10-07 09:26:53.487939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.487953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.045 [2024-10-07 09:26:53.488011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.045 [2024-10-07 09:26:53.488024] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.304 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:07:58.304 #18 NEW cov: 12395 ft: 15254 corp: 17/48b lim: 5 exec/s: 18 rss: 75Mb L: 4/5 MS: 1 CMP- DE: "\377\036"- 00:07:58.304 [2024-10-07 09:26:53.820877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.304 [2024-10-07 09:26:53.820921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.304 [2024-10-07 09:26:53.821017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.304 [2024-10-07 09:26:53.821033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.304 [2024-10-07 09:26:53.821131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.304 [2024-10-07 09:26:53.821147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.304 [2024-10-07 09:26:53.821242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.304 [2024-10-07 09:26:53.821258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.304 #19 NEW cov: 12395 ft: 15318 corp: 18/52b lim: 5 exec/s: 19 rss: 75Mb L: 4/5 MS: 1 PersAutoDict- DE: "\377\036"- 00:07:58.564 [2024-10-07 09:26:53.871374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.871403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.871505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.871521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.871613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.871628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.871724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.871739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.871839] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.871870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.564 #20 NEW cov: 12395 ft: 15348 corp: 19/57b lim: 5 exec/s: 20 rss: 75Mb L: 5/5 MS: 1 CrossOver- 00:07:58.564 [2024-10-07 09:26:53.941531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.941558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.941651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.941666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.941762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.941776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.941879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.941896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:53.941989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.942006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.564 #21 NEW cov: 12395 ft: 15376 corp: 20/62b lim: 5 exec/s: 21 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:07:58.564 [2024-10-07 09:26:53.990400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:53.990432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.564 #22 NEW cov: 12395 ft: 15381 corp: 21/63b lim: 5 exec/s: 22 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:58.564 [2024-10-07 09:26:54.041020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:54.041046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:54.041159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:54.041176] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.564 #23 NEW cov: 12395 ft: 15398 corp: 22/65b lim: 5 exec/s: 23 rss: 75Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:58.564 [2024-10-07 09:26:54.112067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:54.112093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:54.112188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:54.112203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.564 [2024-10-07 09:26:54.112307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.564 [2024-10-07 09:26:54.112323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.565 [2024-10-07 09:26:54.112417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.565 [2024-10-07 09:26:54.112433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.825 #24 NEW cov: 12395 ft: 15411 corp: 23/69b lim: 5 exec/s: 24 rss: 75Mb L: 4/5 MS: 1 InsertByte- 00:07:58.825 [2024-10-07 09:26:54.181134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.181161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.825 #25 NEW cov: 12395 ft: 15464 corp: 24/70b lim: 5 exec/s: 25 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:07:58.825 [2024-10-07 09:26:54.232144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.232171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.232269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.232285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.232379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.232393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.825 #26 NEW cov: 12395 ft: 15483 corp: 25/73b lim: 5 exec/s: 26 rss: 75Mb 
L: 3/5 MS: 1 ChangeBit- 00:07:58.825 [2024-10-07 09:26:54.283309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.283336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.283443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.283461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.283563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.283579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.283672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.283689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.283788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.283804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.825 #27 NEW cov: 12395 ft: 15558 corp: 26/78b lim: 5 exec/s: 27 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:58.825 [2024-10-07 09:26:54.342795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.342829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.342955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.342973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.825 [2024-10-07 09:26:54.343066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:58.825 [2024-10-07 09:26:54.343083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.825 #28 NEW cov: 12395 ft: 15589 corp: 27/81b lim: 5 exec/s: 28 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:07:59.085 [2024-10-07 09:26:54.412848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.412877] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.085 [2024-10-07 09:26:54.412976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.412994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.085 #29 NEW cov: 12395 ft: 15606 corp: 28/83b lim: 5 exec/s: 29 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:07:59.085 [2024-10-07 09:26:54.472918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.472950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.085 [2024-10-07 09:26:54.473063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.473081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.085 #30 NEW cov: 12395 ft: 15612 corp: 29/85b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:59.085 [2024-10-07 09:26:54.524179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.524205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.085 [2024-10-07 09:26:54.524313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.524330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.085 [2024-10-07 09:26:54.524424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.524440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.085 [2024-10-07 09:26:54.524534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.524551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.085 [2024-10-07 09:26:54.524655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.085 [2024-10-07 09:26:54.524671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:59.086 #31 NEW cov: 12395 ft: 15632 corp: 30/90b lim: 5 exec/s: 31 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:59.086 [2024-10-07 09:26:54.603473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.086 [2024-10-07 09:26:54.603501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.086 [2024-10-07 09:26:54.603592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:59.086 [2024-10-07 09:26:54.603610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.086 #32 pulse cov: 12395 ft: 15653 corp: 30/90b lim: 5 exec/s: 16 rss: 75Mb 00:07:59.086 #32 NEW cov: 12395 ft: 15653 corp: 31/92b lim: 5 exec/s: 16 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:07:59.086 #32 DONE cov: 12395 ft: 15653 corp: 31/92b lim: 5 exec/s: 16 rss: 75Mb 00:07:59.086 ###### Recommended dictionary. ###### 00:07:59.086 "\377\036" # Uses: 1 00:07:59.086 ###### End of recommended dictionary. ###### 00:07:59.086 Done 32 runs in 2 second(s) 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:59.346 09:26:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 
traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:59.346 [2024-10-07 09:26:54.801187] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:59.346 [2024-10-07 09:26:54.801266] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490300 ] 00:07:59.606 [2024-10-07 09:26:55.067991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.606 [2024-10-07 09:26:55.158373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.865 [2024-10-07 09:26:55.217547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.865 [2024-10-07 09:26:55.233747] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:59.865 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.865 INFO: Seed: 1494902310 00:07:59.865 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:07:59.865 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:07:59.865 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:59.865 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.865 #2 INITED exec/s: 0 rss: 66Mb 00:07:59.865 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.865 This may also happen if the target rejected all inputs we tried so far 00:07:59.865 [2024-10-07 09:26:55.282419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.865 [2024-10-07 09:26:55.282448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.865 [2024-10-07 09:26:55.282505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.865 [2024-10-07 09:26:55.282519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.125 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:00.125 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:00.125 #20 NEW cov: 12191 ft: 12187 corp: 2/23b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:08:00.125 [2024-10-07 09:26:55.623317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff03 cdw11:17ff0317 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.125 [2024-10-07 09:26:55.623355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.125 #23 NEW cov: 12304 ft: 13041 corp: 3/31b lim: 40 exec/s: 0 rss: 74Mb L: 8/22 MS: 3 ChangeByte-CMP-CopyPart- DE: "\377\377\377\003"- 00:08:00.125 [2024-10-07 09:26:55.663439] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.125 [2024-10-07 09:26:55.663467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.125 [2024-10-07 09:26:55.663530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.125 [2024-10-07 09:26:55.663543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.384 #24 NEW cov: 12310 ft: 13223 corp: 4/50b lim: 40 exec/s: 0 rss: 74Mb L: 19/22 MS: 1 EraseBytes- 00:08:00.384 [2024-10-07 09:26:55.723465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff03 cdw11:17ffff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.384 [2024-10-07 09:26:55.723491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.384 #27 NEW cov: 12395 ft: 13461 corp: 5/59b lim: 40 exec/s: 0 rss: 74Mb L: 9/22 MS: 3 EraseBytes-EraseBytes-CrossOver- 00:08:00.384 [2024-10-07 09:26:55.783779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.384 [2024-10-07 09:26:55.783806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.384 [2024-10-07 09:26:55.783875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.384 [2024-10-07 09:26:55.783890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.384 #28 NEW cov: 12395 ft: 13551 corp: 6/78b lim: 40 exec/s: 0 rss: 74Mb L: 19/22 MS: 1 ShuffleBytes- 00:08:00.384 [2024-10-07 09:26:55.843751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff03 cdw11:17ffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.384 [2024-10-07 09:26:55.843777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.384 #29 NEW cov: 12395 ft: 13768 corp: 7/88b lim: 40 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 InsertByte- 00:08:00.385 [2024-10-07 09:26:55.903938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.385 [2024-10-07 09:26:55.903964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.385 #32 NEW cov: 12395 ft: 13854 corp: 8/102b lim: 40 exec/s: 0 rss: 74Mb L: 14/22 MS: 3 CrossOver-ChangeBit-CrossOver- 00:08:00.385 [2024-10-07 09:26:55.944087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff03ff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.385 [2024-10-07 09:26:55.944112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.644 
#33 NEW cov: 12395 ft: 13900 corp: 9/111b lim: 40 exec/s: 0 rss: 74Mb L: 9/22 MS: 1 PersAutoDict- DE: "\377\377\377\003"- 00:08:00.644 [2024-10-07 09:26:55.984160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e0affff cdw11:ff03ff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.644 [2024-10-07 09:26:55.984185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.644 #34 NEW cov: 12395 ft: 13929 corp: 10/120b lim: 40 exec/s: 0 rss: 74Mb L: 9/22 MS: 1 CrossOver- 00:08:00.644 [2024-10-07 09:26:56.044333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0317 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.644 [2024-10-07 09:26:56.044360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.644 #35 NEW cov: 12395 ft: 13951 corp: 11/130b lim: 40 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 CopyPart- 00:08:00.644 [2024-10-07 09:26:56.104509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff0b00 cdw11:00000317 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.644 [2024-10-07 09:26:56.104535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.644 #36 NEW cov: 12395 ft: 13955 corp: 12/140b lim: 40 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 ChangeBinInt- 00:08:00.644 [2024-10-07 09:26:56.164802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8e000000 cdw11:0000009e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.644 [2024-10-07 09:26:56.164832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.644 [2024-10-07 09:26:56.164895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.644 [2024-10-07 09:26:56.164908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.903 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:00.903 #37 NEW cov: 12418 ft: 14012 corp: 13/160b lim: 40 exec/s: 0 rss: 74Mb L: 20/22 MS: 1 InsertRepeatedBytes- 00:08:00.903 [2024-10-07 09:26:56.224861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff7eff cdw11:0b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.224886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.903 #38 NEW cov: 12418 ft: 14062 corp: 14/172b lim: 40 exec/s: 38 rss: 74Mb L: 12/22 MS: 1 CMP- DE: "\377~"- 00:08:00.903 [2024-10-07 09:26:56.285144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff1700 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.285170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.903 [2024-10-07 09:26:56.285233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fbffff03 cdw11:1703ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.285248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.903 #47 NEW cov: 12418 ft: 14157 corp: 15/188b lim: 40 exec/s: 47 rss: 74Mb L: 16/22 MS: 4 EraseBytes-ChangeBit-ShuffleBytes-CrossOver- 00:08:00.903 [2024-10-07 09:26:56.325125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff03ff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.325150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.903 #48 NEW cov: 12418 ft: 14225 corp: 16/197b lim: 40 exec/s: 48 rss: 74Mb L: 9/22 MS: 1 ShuffleBytes- 00:08:00.903 [2024-10-07 09:26:56.365221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fffffffd cdw11:16ffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.365247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.903 #49 NEW cov: 12418 ft: 14240 corp: 17/207b lim: 40 exec/s: 49 rss: 74Mb L: 10/22 MS: 1 ChangeBinInt- 00:08:00.903 [2024-10-07 09:26:56.405500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8e000000 cdw11:0000009e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.405527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.903 [2024-10-07 09:26:56.405589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:8e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.405602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.903 #50 NEW cov: 12418 ft: 14262 corp: 18/227b lim: 40 exec/s: 50 rss: 75Mb L: 20/22 MS: 1 ChangeBit- 00:08:00.903 [2024-10-07 09:26:56.465571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffff7e cdw11:ffffff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.903 [2024-10-07 09:26:56.465595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.163 #51 NEW cov: 12418 ft: 14318 corp: 19/238b lim: 40 exec/s: 51 rss: 75Mb L: 11/22 MS: 1 PersAutoDict- DE: "\377~"- 00:08:01.163 [2024-10-07 09:26:56.505783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9eff7e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.163 [2024-10-07 09:26:56.505808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.163 [2024-10-07 09:26:56.505896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.163 [2024-10-07 09:26:56.505910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.163 #52 NEW cov: 12418 ft: 
14359 corp: 20/257b lim: 40 exec/s: 52 rss: 75Mb L: 19/22 MS: 1 PersAutoDict- DE: "\377~"- 00:08:01.163 [2024-10-07 09:26:56.565867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e0affff cdw11:ff03ff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.163 [2024-10-07 09:26:56.565893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.163 #53 NEW cov: 12418 ft: 14378 corp: 21/266b lim: 40 exec/s: 53 rss: 75Mb L: 9/22 MS: 1 ShuffleBytes- 00:08:01.163 [2024-10-07 09:26:56.625999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:be0affff cdw11:ff03ff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.163 [2024-10-07 09:26:56.626025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.163 #54 NEW cov: 12418 ft: 14396 corp: 22/275b lim: 40 exec/s: 54 rss: 75Mb L: 9/22 MS: 1 ChangeBit- 00:08:01.163 [2024-10-07 09:26:56.686308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9eff7e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.163 [2024-10-07 09:26:56.686333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.163 [2024-10-07 09:26:56.686411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e34 cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.163 [2024-10-07 09:26:56.686425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.423 #55 NEW cov: 12418 ft: 14454 corp: 23/294b lim: 40 exec/s: 55 rss: 75Mb L: 19/22 MS: 1 ChangeByte- 00:08:01.423 [2024-10-07 09:26:56.746487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.746512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.423 [2024-10-07 09:26:56.746576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff7e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.746590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.423 #56 NEW cov: 12418 ft: 14464 corp: 24/315b lim: 40 exec/s: 56 rss: 75Mb L: 21/22 MS: 1 PersAutoDict- DE: "\377~"- 00:08:01.423 [2024-10-07 09:26:56.786572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.786598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.423 [2024-10-07 09:26:56.786678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.786692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:01.423 #57 NEW cov: 12418 ft: 14511 corp: 25/334b lim: 40 exec/s: 57 rss: 75Mb L: 19/22 MS: 1 CopyPart- 00:08:01.423 [2024-10-07 09:26:56.826568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e0aff0a cdw11:ffffff03 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.826595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.423 #58 NEW cov: 12418 ft: 14519 corp: 26/343b lim: 40 exec/s: 58 rss: 75Mb L: 9/22 MS: 1 CopyPart- 00:08:01.423 [2024-10-07 09:26:56.866823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:be0affff cdw11:be0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.866848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.423 [2024-10-07 09:26:56.866931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff03ff03 cdw11:17ff03ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.866945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.423 #59 NEW cov: 12418 ft: 14550 corp: 27/361b lim: 40 exec/s: 59 rss: 75Mb L: 18/22 MS: 1 CopyPart- 00:08:01.423 [2024-10-07 09:26:56.927012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8e000000 cdw11:009e9e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.927038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.423 [2024-10-07 09:26:56.927103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9e9e009e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.927117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.423 #60 NEW cov: 12418 ft: 14586 corp: 28/381b lim: 40 exec/s: 60 rss: 75Mb L: 20/22 MS: 1 ShuffleBytes- 00:08:01.423 [2024-10-07 09:26:56.967084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff1700 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.967111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.423 [2024-10-07 09:26:56.967198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fbffff03 cdw11:1703c9ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.423 [2024-10-07 09:26:56.967214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.782 #61 NEW cov: 12418 ft: 14592 corp: 29/397b lim: 40 exec/s: 61 rss: 75Mb L: 16/22 MS: 1 ChangeByte- 00:08:01.782 [2024-10-07 09:26:57.027390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e0aff cdw11:ffff039e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.027417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:08:01.782 [2024-10-07 09:26:57.027500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff9e9e9e cdw11:9e03179e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.027514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.782 [2024-10-07 09:26:57.027581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:9eff7e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.027595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.782 #62 NEW cov: 12418 ft: 14867 corp: 30/427b lim: 40 exec/s: 62 rss: 75Mb L: 30/30 MS: 1 CrossOver- 00:08:01.782 [2024-10-07 09:26:57.087304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:fffffffd cdw11:16ffff40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.087330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.782 #63 NEW cov: 12418 ft: 14877 corp: 31/437b lim: 40 exec/s: 63 rss: 75Mb L: 10/30 MS: 1 ChangeBit- 00:08:01.782 [2024-10-07 09:26:57.147638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:be0affff cdw11:be0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.147664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.782 [2024-10-07 09:26:57.147727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff03ff03 cdw11:17ff03ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.147741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.782 #64 NEW cov: 12418 ft: 14887 corp: 32/455b lim: 40 exec/s: 64 rss: 75Mb L: 18/30 MS: 1 PersAutoDict- DE: "\377~"- 00:08:01.782 [2024-10-07 09:26:57.207671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9e9e9e9e cdw11:9e9e9e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.207697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.782 #65 NEW cov: 12418 ft: 14900 corp: 33/465b lim: 40 exec/s: 65 rss: 75Mb L: 10/30 MS: 1 EraseBytes- 00:08:01.782 [2024-10-07 09:26:57.247905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:be0affff cdw11:ffff0317 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.247930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.782 [2024-10-07 09:26:57.247994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0317 cdw11:17ff03ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.782 [2024-10-07 09:26:57.248009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.782 #66 NEW cov: 12418 ft: 14915 corp: 34/483b lim: 40 exec/s: 
33 rss: 75Mb L: 18/30 MS: 1 CrossOver- 00:08:01.782 #66 DONE cov: 12418 ft: 14915 corp: 34/483b lim: 40 exec/s: 33 rss: 75Mb 00:08:01.782 ###### Recommended dictionary. ###### 00:08:01.782 "\377\377\377\003" # Uses: 1 00:08:01.782 "\377~" # Uses: 4 00:08:01.782 ###### End of recommended dictionary. ###### 00:08:01.782 Done 66 runs in 2 second(s) 00:08:02.146 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:02.146 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:02.147 09:26:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:02.147 [2024-10-07 09:26:57.473771] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
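
The xtrace above shows the per-fuzzer setup that nvmf/run.sh repeats for every fuzzer type: derive a listen port from the fuzzer number (printf %02d 10 gives port 4410, 11 gives 4411), rewrite the shared fuzz_json.conf template (which listens on 4420) to point at that port, register LSAN leak suppressions, and launch llvm_nvme_fuzz against the resulting TRID. A minimal sketch of one iteration under those assumptions follows; the variable names, $rootdir, and the sed/echo output redirections are illustrative, since xtrace does not display redirection targets:

    # Sketch of one iteration of the loop traced above (assumed pieces marked).
    fuzzer_type=11                                   # the -Z value for this run
    port="44$(printf %02d "$fuzzer_type")"           # matches port=4410/4411 in the trace
    nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"   # $rootdir: assumed spdk checkout
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Redirection target assumed; the trace only shows the sed command itself.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"    # destinations assumed
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
        -t 1 -D "$corpus_dir" -Z "$fuzzer_type"
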
00:08:02.147 [2024-10-07 09:26:57.473850] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490662 ] 00:08:02.407 [2024-10-07 09:26:57.782800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.407 [2024-10-07 09:26:57.877548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.407 [2024-10-07 09:26:57.936675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.407 [2024-10-07 09:26:57.952886] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:02.407 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.407 INFO: Seed: 4215891481 00:08:02.667 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:02.667 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:02.667 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:02.667 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.667 #2 INITED exec/s: 0 rss: 67Mb 00:08:02.667 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:02.667 This may also happen if the target rejected all inputs we tried so far 00:08:02.667 [2024-10-07 09:26:58.008723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.667 [2024-10-07 09:26:58.008751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.667 [2024-10-07 09:26:58.008817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.667 [2024-10-07 09:26:58.008832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.667 [2024-10-07 09:26:58.008910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.667 [2024-10-07 09:26:58.008924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.926 NEW_FUNC[1/711]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:02.926 NEW_FUNC[2/711]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.926 #7 NEW cov: 12174 ft: 12174 corp: 2/28b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 5 CopyPart-CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:08:02.926 [2024-10-07 09:26:58.329603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.329641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.926 [2024-10-07 09:26:58.329697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.329711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.926 [2024-10-07 09:26:58.329765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.329778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.926 [2024-10-07 09:26:58.329838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.329852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.926 NEW_FUNC[1/4]: 0x1f3d688 in spdk_thread_is_exited /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:751 00:08:02.926 NEW_FUNC[2/4]: 0x1f3e418 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:820 00:08:02.926 #8 NEW cov: 12316 ft: 13135 corp: 3/62b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:08:02.926 [2024-10-07 09:26:58.389623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.389650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.926 [2024-10-07 09:26:58.389722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.389736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.926 [2024-10-07 09:26:58.389792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.389808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.926 [2024-10-07 09:26:58.389868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.926 [2024-10-07 09:26:58.389882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.926 #9 NEW cov: 12322 ft: 13354 corp: 4/96b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:02.926 [2024-10-07 09:26:58.429745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.927 [2024-10-07 09:26:58.429774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.927 [2024-10-07 09:26:58.429850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:86ffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:02.927 [2024-10-07 09:26:58.429865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.927 [2024-10-07 09:26:58.429931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.927 [2024-10-07 09:26:58.429945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.927 [2024-10-07 09:26:58.429999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.927 [2024-10-07 09:26:58.430012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.927 #10 NEW cov: 12407 ft: 13523 corp: 5/130b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeByte- 00:08:02.927 [2024-10-07 09:26:58.489494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a240a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.927 [2024-10-07 09:26:58.489521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.186 #11 NEW cov: 12407 ft: 14311 corp: 6/143b lim: 40 exec/s: 0 rss: 74Mb L: 13/34 MS: 1 CrossOver- 00:08:03.186 [2024-10-07 09:26:58.529513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.529539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.186 #12 NEW cov: 12407 ft: 14407 corp: 7/152b lim: 40 exec/s: 0 rss: 74Mb L: 9/34 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:03.186 [2024-10-07 09:26:58.569641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0aff30ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.569668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.186 #13 NEW cov: 12407 ft: 14501 corp: 8/161b lim: 40 exec/s: 0 rss: 74Mb L: 9/34 MS: 1 ChangeByte- 00:08:03.186 [2024-10-07 09:26:58.630288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.630314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.186 [2024-10-07 09:26:58.630371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.630385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.186 [2024-10-07 09:26:58.630461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.630476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.186 [2024-10-07 09:26:58.630531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.630545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.186 #14 NEW cov: 12407 ft: 14518 corp: 9/196b lim: 40 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:03.186 [2024-10-07 09:26:58.669922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0aff30fd cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.669947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.186 #15 NEW cov: 12407 ft: 14552 corp: 10/205b lim: 40 exec/s: 0 rss: 75Mb L: 9/35 MS: 1 ChangeBit- 00:08:03.186 [2024-10-07 09:26:58.730543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240a30 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.730568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.186 [2024-10-07 09:26:58.730627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.730640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.186 [2024-10-07 09:26:58.730695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.730708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.186 [2024-10-07 09:26:58.730763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.186 [2024-10-07 09:26:58.730776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.446 #16 NEW cov: 12407 ft: 14632 corp: 11/239b lim: 40 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:08:03.446 [2024-10-07 09:26:58.790713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240a30 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.790738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.446 [2024-10-07 09:26:58.790794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.790807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.446 [2024-10-07 09:26:58.790866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:86ffffff cdw11:ffffffff SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.790880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.446 [2024-10-07 09:26:58.790932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.790946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.446 #17 NEW cov: 12407 ft: 14649 corp: 12/273b lim: 40 exec/s: 0 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:08:03.446 [2024-10-07 09:26:58.850420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0aff30ff cdw11:ffff7fff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.850444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.446 #18 NEW cov: 12407 ft: 14678 corp: 13/282b lim: 40 exec/s: 0 rss: 75Mb L: 9/35 MS: 1 ChangeBit- 00:08:03.446 [2024-10-07 09:26:58.890491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.890515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.446 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:03.446 #19 NEW cov: 12430 ft: 14730 corp: 14/291b lim: 40 exec/s: 0 rss: 75Mb L: 9/35 MS: 1 ShuffleBytes- 00:08:03.446 [2024-10-07 09:26:58.930762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.930786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.446 [2024-10-07 09:26:58.930862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.930877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.446 #20 NEW cov: 12430 ft: 14992 corp: 15/308b lim: 40 exec/s: 0 rss: 75Mb L: 17/35 MS: 1 CrossOver- 00:08:03.446 [2024-10-07 09:26:58.991138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2431ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.991164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.446 [2024-10-07 09:26:58.991239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.446 [2024-10-07 09:26:58.991253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.446 [2024-10-07 09:26:58.991309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:03.446 [2024-10-07 09:26:58.991322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.706 #21 NEW cov: 12430 ft: 15058 corp: 16/335b lim: 40 exec/s: 21 rss: 75Mb L: 27/35 MS: 1 ChangeByte- 00:08:03.706 [2024-10-07 09:26:59.031397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2431ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.031422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.706 [2024-10-07 09:26:59.031496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.031511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.706 [2024-10-07 09:26:59.031568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.031581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.706 [2024-10-07 09:26:59.031644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.031657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.706 #22 NEW cov: 12430 ft: 15082 corp: 17/370b lim: 40 exec/s: 22 rss: 75Mb L: 35/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:03.706 [2024-10-07 09:26:59.091254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0aff30ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.091279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.706 [2024-10-07 09:26:59.091352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.091367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.706 #23 NEW cov: 12430 ft: 15091 corp: 18/387b lim: 40 exec/s: 23 rss: 75Mb L: 17/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:03.706 [2024-10-07 09:26:59.131345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2cffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.131370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.706 [2024-10-07 09:26:59.131426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.131440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:03.706 #24 NEW cov: 12430 ft: 15095 corp: 19/404b lim: 40 exec/s: 24 rss: 75Mb L: 17/35 MS: 1 ChangeByte- 00:08:03.706 [2024-10-07 09:26:59.191374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.191399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.706 #25 NEW cov: 12430 ft: 15171 corp: 20/413b lim: 40 exec/s: 25 rss: 75Mb L: 9/35 MS: 1 CopyPart- 00:08:03.706 [2024-10-07 09:26:59.231487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0aff30ff cdw11:fffffbff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.706 [2024-10-07 09:26:59.231512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.706 #26 NEW cov: 12430 ft: 15200 corp: 21/422b lim: 40 exec/s: 26 rss: 75Mb L: 9/35 MS: 1 ChangeBinInt- 00:08:03.966 [2024-10-07 09:26:59.271584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a240a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.271611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.966 #27 NEW cov: 12430 ft: 15234 corp: 22/435b lim: 40 exec/s: 27 rss: 75Mb L: 13/35 MS: 1 CMP- DE: "\377\377\377\007"- 00:08:03.966 [2024-10-07 09:26:59.332356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.332381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.332455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:acacffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.332470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.332530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.332543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.332597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffe2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.332611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.332666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:e2e2e2e2 cdw11:e2ffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.332679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.966 #28 NEW cov: 12430 ft: 15319 corp: 23/475b lim: 40 exec/s: 28 rss: 75Mb L: 40/40 MS: 1 
InsertRepeatedBytes- 00:08:03.966 [2024-10-07 09:26:59.392359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240a30 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.392384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.392458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.392472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.392530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:b3ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.392543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.392599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.392612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.966 #29 NEW cov: 12430 ft: 15366 corp: 24/509b lim: 40 exec/s: 29 rss: 75Mb L: 34/40 MS: 1 ChangeByte- 00:08:03.966 [2024-10-07 09:26:59.432498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.432522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.432596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dfffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.432611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.432668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.432681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.966 [2024-10-07 09:26:59.432737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.432750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.966 #30 NEW cov: 12430 ft: 15447 corp: 25/543b lim: 40 exec/s: 30 rss: 75Mb L: 34/40 MS: 1 ChangeBit- 00:08:03.966 [2024-10-07 09:26:59.472176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a240a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.472204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.966 #31 NEW cov: 12430 ft: 15450 corp: 26/556b lim: 40 exec/s: 31 rss: 75Mb L: 13/40 MS: 1 CrossOver- 00:08:03.966 [2024-10-07 09:26:59.512300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.966 [2024-10-07 09:26:59.512325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.226 #34 NEW cov: 12430 ft: 15475 corp: 27/565b lim: 40 exec/s: 34 rss: 75Mb L: 9/40 MS: 3 EraseBytes-ChangeBit-CopyPart- 00:08:04.226 [2024-10-07 09:26:59.552558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.226 [2024-10-07 09:26:59.552584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.226 [2024-10-07 09:26:59.552641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.226 [2024-10-07 09:26:59.552655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.226 #35 NEW cov: 12430 ft: 15506 corp: 28/582b lim: 40 exec/s: 35 rss: 75Mb L: 17/40 MS: 1 CrossOver- 00:08:04.226 [2024-10-07 09:26:59.612908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.226 [2024-10-07 09:26:59.612932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.226 [2024-10-07 09:26:59.612992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.226 [2024-10-07 09:26:59.613005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.226 [2024-10-07 09:26:59.613059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.226 [2024-10-07 09:26:59.613072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.227 #36 NEW cov: 12430 ft: 15517 corp: 29/611b lim: 40 exec/s: 36 rss: 75Mb L: 29/40 MS: 1 CrossOver- 00:08:04.227 [2024-10-07 09:26:59.653179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240a30 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.653205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.227 [2024-10-07 09:26:59.653262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.653275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.227 [2024-10-07 09:26:59.653331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:86ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.653344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.227 [2024-10-07 09:26:59.653400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.653417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.227 #37 NEW cov: 12430 ft: 15539 corp: 30/645b lim: 40 exec/s: 37 rss: 75Mb L: 34/40 MS: 1 ShuffleBytes- 00:08:04.227 [2024-10-07 09:26:59.713378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.713403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.227 [2024-10-07 09:26:59.713461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.713475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.227 [2024-10-07 09:26:59.713529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.713543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.227 [2024-10-07 09:26:59.713600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.713614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.227 #38 NEW cov: 12430 ft: 15545 corp: 31/680b lim: 40 exec/s: 38 rss: 75Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:08:04.227 [2024-10-07 09:26:59.773094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2cff44ff cdw11:ffff0aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.227 [2024-10-07 09:26:59.773119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.487 #42 NEW cov: 12430 ft: 15562 corp: 32/688b lim: 40 exec/s: 42 rss: 76Mb L: 8/40 MS: 4 EraseBytes-ShuffleBytes-InsertByte-InsertByte- 00:08:04.487 [2024-10-07 09:26:59.833671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240aff cdw11:ff0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.833698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.833756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffac cdw11:acacacff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.833770] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.833832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffff0aff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.833846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.833902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffacff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.833915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.487 #43 NEW cov: 12430 ft: 15568 corp: 33/722b lim: 40 exec/s: 43 rss: 76Mb L: 34/40 MS: 1 CrossOver- 00:08:04.487 [2024-10-07 09:26:59.893969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a240a30 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.893995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.894055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.894069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.894126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.894139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.894194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffb3ff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.894207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.894263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000ff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.894276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:04.487 #44 NEW cov: 12430 ft: 15573 corp: 34/762b lim: 40 exec/s: 44 rss: 76Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:04.487 [2024-10-07 09:26:59.954024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:240e30ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.954049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.954105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff86 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.954118] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.954174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.954187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.487 [2024-10-07 09:26:59.954240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.487 [2024-10-07 09:26:59.954252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.487 #49 NEW cov: 12430 ft: 15588 corp: 35/796b lim: 40 exec/s: 24 rss: 76Mb L: 34/40 MS: 5 CrossOver-ChangeBit-ShuffleBytes-EraseBytes-CrossOver- 00:08:04.487 #49 DONE cov: 12430 ft: 15588 corp: 35/796b lim: 40 exec/s: 24 rss: 76Mb 00:08:04.487 ###### Recommended dictionary. ###### 00:08:04.487 "\377\377\377\377\377\377\377\377" # Uses: 2 00:08:04.487 "\377\377\377\007" # Uses: 0 00:08:04.487 ###### End of recommended dictionary. ###### 00:08:04.487 Done 49 runs in 2 second(s) 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:04.746 09:27:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:04.746 [2024-10-07 09:27:00.170572] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:04.746 [2024-10-07 09:27:00.170653] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491039 ] 00:08:05.005 [2024-10-07 09:27:00.442044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.005 [2024-10-07 09:27:00.530205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.263 [2024-10-07 09:27:00.589920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.263 [2024-10-07 09:27:00.606144] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:05.263 INFO: Running with entropic power schedule (0xFF, 100). 00:08:05.263 INFO: Seed: 2572939075 00:08:05.263 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:05.263 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:05.263 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:05.263 INFO: A corpus is not provided, starting from an empty corpus 00:08:05.263 #2 INITED exec/s: 0 rss: 67Mb 00:08:05.263 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:05.263 This may also happen if the target rejected all inputs we tried so far 00:08:05.263 [2024-10-07 09:27:00.655691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.264 [2024-10-07 09:27:00.655721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.264 [2024-10-07 09:27:00.655778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.264 [2024-10-07 09:27:00.655793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.264 [2024-10-07 09:27:00.655849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.264 [2024-10-07 09:27:00.655865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.523 NEW_FUNC[1/714]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:05.523 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.523 #13 NEW cov: 12185 ft: 12173 corp: 2/26b lim: 40 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:05.523 [2024-10-07 09:27:01.006573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.523 [2024-10-07 09:27:01.006612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.523 [2024-10-07 09:27:01.006670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.523 [2024-10-07 09:27:01.006684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.523 [2024-10-07 09:27:01.006741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ac000002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.523 [2024-10-07 09:27:01.006756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.523 NEW_FUNC[1/1]: 0x1f8f2a8 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:957 00:08:05.523 #14 NEW cov: 12314 ft: 12803 corp: 3/55b lim: 40 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 CMP- DE: "\000\000\002\000"- 00:08:05.523 [2024-10-07 09:27:01.076917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.523 [2024-10-07 09:27:01.076948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.523 [2024-10-07 09:27:01.077006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.523 [2024-10-07 09:27:01.077021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.523 [2024-10-07 09:27:01.077079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.524 [2024-10-07 09:27:01.077094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.524 [2024-10-07 09:27:01.077151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.524 [2024-10-07 09:27:01.077164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.783 #15 NEW cov: 12320 ft: 13301 corp: 4/93b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:05.783 [2024-10-07 09:27:01.116808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.116849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.116909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.116926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.116985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.116999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.783 #16 NEW cov: 12405 ft: 13549 corp: 5/118b lim: 40 exec/s: 0 rss: 74Mb L: 25/38 MS: 1 CopyPart- 00:08:05.783 [2024-10-07 09:27:01.157103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.157133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.157192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.157208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.157266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.157280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.157337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.157351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.783 #17 NEW cov: 12405 ft: 13673 corp: 6/156b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 CMP- DE: "\001\002\000\000"- 00:08:05.783 [2024-10-07 09:27:01.216793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3a0a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.216829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.783 #20 NEW cov: 12405 ft: 14494 corp: 7/168b lim: 40 exec/s: 0 rss: 74Mb L: 12/38 MS: 3 InsertByte-ChangeByte-CrossOver- 00:08:05.783 [2024-10-07 09:27:01.257178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.257206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.257267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.257283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.783 [2024-10-07 09:27:01.257340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ac400002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.257371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.783 #21 NEW cov: 12405 ft: 14561 corp: 8/197b lim: 40 exec/s: 0 rss: 74Mb L: 29/38 MS: 1 ChangeBit- 00:08:05.783 [2024-10-07 09:27:01.317434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.783 [2024-10-07 09:27:01.317464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.784 [2024-10-07 09:27:01.317523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.784 [2024-10-07 09:27:01.317538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.784 [2024-10-07 09:27:01.317597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ac400002 cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.784 [2024-10-07 09:27:01.317616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.044 #22 NEW cov: 12405 ft: 14680 corp: 9/226b lim: 40 exec/s: 0 rss: 75Mb L: 29/38 MS: 1 CopyPart- 00:08:06.044 [2024-10-07 09:27:01.377740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 
[2024-10-07 09:27:01.377768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.377827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.377842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.377902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.377917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.377974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.377987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.044 #23 NEW cov: 12405 ft: 14720 corp: 10/264b lim: 40 exec/s: 0 rss: 75Mb L: 38/38 MS: 1 ChangeBit- 00:08:06.044 [2024-10-07 09:27:01.417642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.417668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.417727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.417742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.417800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.417818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.044 #25 NEW cov: 12405 ft: 14841 corp: 11/288b lim: 40 exec/s: 0 rss: 75Mb L: 24/38 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:06.044 [2024-10-07 09:27:01.457778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.457805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.457867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ac010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.457882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.457940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00acacac SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.457955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.044 #26 NEW cov: 12405 ft: 14874 corp: 12/317b lim: 40 exec/s: 0 rss: 75Mb L: 29/38 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:06.044 [2024-10-07 09:27:01.498014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.498043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.498102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.498116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.498173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.498187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.498243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.498256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.044 #27 NEW cov: 12405 ft: 14935 corp: 13/355b lim: 40 exec/s: 0 rss: 75Mb L: 38/38 MS: 1 ChangeBit- 00:08:06.044 [2024-10-07 09:27:01.538168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.538197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.538256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.538272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.538328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.538343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.538401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:acacac40 cdw11:000200ac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.538416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.044 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:06.044 #28 NEW cov: 
12428 ft: 14979 corp: 14/394b lim: 40 exec/s: 0 rss: 75Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:08:06.044 [2024-10-07 09:27:01.578096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.578123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.578181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ac010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.578196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.044 [2024-10-07 09:27:01.578254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.044 [2024-10-07 09:27:01.578269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.304 #29 NEW cov: 12428 ft: 15024 corp: 15/423b lim: 40 exec/s: 0 rss: 75Mb L: 29/39 MS: 1 ChangeBinInt- 00:08:06.304 [2024-10-07 09:27:01.638272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:ac000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.638298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.638359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00acacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.638374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.638434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ac000002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.638448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.304 #30 NEW cov: 12428 ft: 15072 corp: 16/452b lim: 40 exec/s: 30 rss: 75Mb L: 29/39 MS: 1 CopyPart- 00:08:06.304 [2024-10-07 09:27:01.678396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.678424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.678485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ac010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.678500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.678559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00010200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.678574] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.304 #31 NEW cov: 12428 ft: 15095 corp: 17/481b lim: 40 exec/s: 31 rss: 75Mb L: 29/39 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:08:06.304 [2024-10-07 09:27:01.718146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3a0a0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.718172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.304 #32 NEW cov: 12428 ft: 15155 corp: 18/494b lim: 40 exec/s: 32 rss: 75Mb L: 13/39 MS: 1 InsertByte- 00:08:06.304 [2024-10-07 09:27:01.778662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.778689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.778744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.778759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.778818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.778833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.304 #33 NEW cov: 12428 ft: 15180 corp: 19/519b lim: 40 exec/s: 33 rss: 75Mb L: 25/39 MS: 1 ShuffleBytes- 00:08:06.304 [2024-10-07 09:27:01.839141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:ac2bacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.839171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.839227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.839240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.839295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.839309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.839379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:acacacac cdw11:40000200 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.839394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.304 [2024-10-07 09:27:01.839449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.304 [2024-10-07 09:27:01.839463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:06.564 #34 NEW cov: 12428 ft: 15232 corp: 20/559b lim: 40 exec/s: 34 rss: 75Mb L: 40/40 MS: 1 InsertByte- 00:08:06.564 [2024-10-07 09:27:01.899023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacac00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.899049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:01.899107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00ac0002 cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.899122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:01.899177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ac000002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.899191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.564 #35 NEW cov: 12428 ft: 15250 corp: 21/588b lim: 40 exec/s: 35 rss: 75Mb L: 29/40 MS: 1 ShuffleBytes- 00:08:06.564 [2024-10-07 09:27:01.959356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.959382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:01.959439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.959453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:01.959508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.959523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:01.959578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:01.959594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.564 #36 NEW cov: 12428 ft: 15276 corp: 22/626b lim: 40 exec/s: 36 rss: 75Mb L: 38/40 MS: 1 ShuffleBytes- 00:08:06.564 [2024-10-07 09:27:02.019348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:02.019374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:02.019430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acac8b8b cdw11:8bacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:02.019444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:02.019499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:02.019513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.564 #37 NEW cov: 12428 ft: 15334 corp: 23/654b lim: 40 exec/s: 37 rss: 75Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:08:06.564 [2024-10-07 09:27:02.059599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01020000 cdw11:0aacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.564 [2024-10-07 09:27:02.059624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.564 [2024-10-07 09:27:02.059678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ac000002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.059692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.565 [2024-10-07 09:27:02.059746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:ac000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.059760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.565 [2024-10-07 09:27:02.059819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00acacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.059833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.565 #38 NEW cov: 12428 ft: 15353 corp: 24/687b lim: 40 exec/s: 38 rss: 75Mb L: 33/40 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:08:06.565 [2024-10-07 09:27:02.099713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.099740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.565 [2024-10-07 09:27:02.099799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.099817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.565 [2024-10-07 09:27:02.099874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.099904] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.565 [2024-10-07 09:27:02.099961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:acacb340 cdw11:000200ac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.565 [2024-10-07 09:27:02.099979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.565 #39 NEW cov: 12428 ft: 15411 corp: 25/726b lim: 40 exec/s: 39 rss: 75Mb L: 39/40 MS: 1 ChangeBinInt- 00:08:06.824 [2024-10-07 09:27:02.139826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01020000 cdw11:f6535353 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.139853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.824 [2024-10-07 09:27:02.139912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:53f80002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.139927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.824 [2024-10-07 09:27:02.139984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:ac000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.139998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.824 [2024-10-07 09:27:02.140052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00acacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.140065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.824 #40 NEW cov: 12428 ft: 15418 corp: 26/759b lim: 40 exec/s: 40 rss: 75Mb L: 33/40 MS: 1 ChangeBinInt- 00:08:06.824 [2024-10-07 09:27:02.199519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01020a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.199545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.824 #41 NEW cov: 12428 ft: 15476 corp: 27/774b lim: 40 exec/s: 41 rss: 75Mb L: 15/40 MS: 1 CrossOver- 00:08:06.824 [2024-10-07 09:27:02.260167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.260194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.824 [2024-10-07 09:27:02.260252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acac0102 cdw11:0000acac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.824 [2024-10-07 09:27:02.260267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.824 [2024-10-07 09:27:02.260321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:6 nsid:0 cdw10:acacacac cdw11:ac000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.260336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.825 [2024-10-07 09:27:02.260390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00acacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.260404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.825 #42 NEW cov: 12428 ft: 15493 corp: 28/807b lim: 40 exec/s: 42 rss: 75Mb L: 33/40 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:08:06.825 [2024-10-07 09:27:02.300309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.300337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.825 [2024-10-07 09:27:02.300398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ac010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.300413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.825 [2024-10-07 09:27:02.300468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.300482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.825 [2024-10-07 09:27:02.300537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00010200 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.300550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.825 #43 NEW cov: 12428 ft: 15516 corp: 29/840b lim: 40 exec/s: 43 rss: 75Mb L: 33/40 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:06.825 [2024-10-07 09:27:02.360122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00010200 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.360153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.825 [2024-10-07 09:27:02.360212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.825 [2024-10-07 09:27:02.360227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.085 #48 NEW cov: 12428 ft: 15722 corp: 30/856b lim: 40 exec/s: 48 rss: 76Mb L: 16/40 MS: 5 EraseBytes-CrossOver-PersAutoDict-CrossOver-InsertRepeatedBytes- DE: "\000\000\002\000"- 00:08:07.085 [2024-10-07 09:27:02.420582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 
09:27:02.420614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.420673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.420687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.420743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.420758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.420819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.420833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.085 #49 NEW cov: 12428 ft: 15747 corp: 31/894b lim: 40 exec/s: 49 rss: 76Mb L: 38/40 MS: 1 CopyPart- 00:08:07.085 [2024-10-07 09:27:02.480642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.480673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.480732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacac00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.480751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.480806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000200ac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.480828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.085 #50 NEW cov: 12428 ft: 15752 corp: 32/921b lim: 40 exec/s: 50 rss: 76Mb L: 27/40 MS: 1 EraseBytes- 00:08:07.085 [2024-10-07 09:27:02.520923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.520953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.521013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acac8b8b cdw11:8bacac01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.521030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.521088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000ac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:07.085 [2024-10-07 09:27:02.521102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.521158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.521173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.085 #51 NEW cov: 12428 ft: 15783 corp: 33/953b lim: 40 exec/s: 51 rss: 76Mb L: 32/40 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:07.085 [2024-10-07 09:27:02.580742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.580773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.580834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.580849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.085 #57 NEW cov: 12428 ft: 15795 corp: 34/969b lim: 40 exec/s: 57 rss: 76Mb L: 16/40 MS: 1 CrossOver- 00:08:07.085 [2024-10-07 09:27:02.641032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.641060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.641119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ac010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.641133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.085 [2024-10-07 09:27:02.641187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00acacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:07.085 [2024-10-07 09:27:02.641201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.345 #58 NEW cov: 12428 ft: 15800 corp: 35/998b lim: 40 exec/s: 29 rss: 76Mb L: 29/40 MS: 1 ShuffleBytes- 00:08:07.345 #58 DONE cov: 12428 ft: 15800 corp: 35/998b lim: 40 exec/s: 29 rss: 76Mb 00:08:07.345 ###### Recommended dictionary. ###### 00:08:07.345 "\000\000\002\000" # Uses: 1 00:08:07.345 "\001\002\000\000" # Uses: 3 00:08:07.345 "\001\000\000\000" # Uses: 2 00:08:07.345 ###### End of recommended dictionary. 
###### 00:08:07.345 Done 58 runs in 2 second(s) 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:07.345 09:27:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:07.345 [2024-10-07 09:27:02.852544] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:07.345 [2024-10-07 09:27:02.852622] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491509 ] 00:08:07.913 [2024-10-07 09:27:03.169597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.913 [2024-10-07 09:27:03.264269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.913 [2024-10-07 09:27:03.323425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.913 [2024-10-07 09:27:03.339629] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:07.913 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.913 INFO: Seed: 1012952698 00:08:07.913 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:07.913 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:07.913 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:07.913 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.913 #2 INITED exec/s: 0 rss: 67Mb 00:08:07.913 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.913 This may also happen if the target rejected all inputs we tried so far 00:08:07.913 [2024-10-07 09:27:03.417263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.913 [2024-10-07 09:27:03.417311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.913 [2024-10-07 09:27:03.417460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.914 [2024-10-07 09:27:03.417479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.914 [2024-10-07 09:27:03.417597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.914 [2024-10-07 09:27:03.417616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.482 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:08.482 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:08.482 #38 NEW cov: 12171 ft: 12169 corp: 2/27b lim: 40 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:08:08.482 [2024-10-07 09:27:03.769262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.769312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.769423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.769445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.769549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.769569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.769675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.769696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.769801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.769830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.482 #39 NEW cov: 12301 ft: 13207 corp: 3/67b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:08.482 [2024-10-07 09:27:03.849207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.849241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.849334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.849352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.849446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.849465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.849563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.849578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.482 #40 NEW cov: 12307 ft: 13559 corp: 4/106b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:08:08.482 [2024-10-07 09:27:03.899151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.899179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.899272] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.899291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.899384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.899399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.482 #41 NEW cov: 12392 ft: 13802 corp: 5/133b lim: 40 exec/s: 0 rss: 74Mb L: 27/40 MS: 1 InsertByte- 00:08:08.482 [2024-10-07 09:27:03.949707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.949734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.949836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff03 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.949853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.949952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.949968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.482 [2024-10-07 09:27:03.950066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:03.950082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.482 #42 NEW cov: 12392 ft: 13892 corp: 6/172b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:08:08.482 [2024-10-07 09:27:04.019956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.482 [2024-10-07 09:27:04.019984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.483 [2024-10-07 09:27:04.020085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.483 [2024-10-07 09:27:04.020101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.483 [2024-10-07 09:27:04.020197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.483 [2024-10-07 09:27:04.020216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:08.483 [2024-10-07 09:27:04.020318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.483 [2024-10-07 09:27:04.020334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.741 #43 NEW cov: 12392 ft: 13947 corp: 7/204b lim: 40 exec/s: 0 rss: 74Mb L: 32/40 MS: 1 EraseBytes- 00:08:08.741 [2024-10-07 09:27:04.090443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.090469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.090573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff03 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.090589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.090702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.090719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.090811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:002c0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.090833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.090940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.090955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.741 #44 NEW cov: 12392 ft: 13999 corp: 8/244b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertByte- 00:08:08.741 [2024-10-07 09:27:04.160203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff30ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.160229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.160335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.160351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.160452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.160469] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.741 #45 NEW cov: 12392 ft: 14017 corp: 9/271b lim: 40 exec/s: 0 rss: 74Mb L: 27/40 MS: 1 InsertByte- 00:08:08.741 [2024-10-07 09:27:04.210933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.210960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.211068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff03 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.211087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.211180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.741 [2024-10-07 09:27:04.211198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.741 [2024-10-07 09:27:04.211295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:fff6ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.742 [2024-10-07 09:27:04.211312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.742 [2024-10-07 09:27:04.211406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.742 [2024-10-07 09:27:04.211423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.742 #46 NEW cov: 12392 ft: 14078 corp: 10/311b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertByte- 00:08:08.742 [2024-10-07 09:27:04.260680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.742 [2024-10-07 09:27:04.260707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.742 [2024-10-07 09:27:04.260815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fff6ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.742 [2024-10-07 09:27:04.260833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.742 [2024-10-07 09:27:04.260926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.742 [2024-10-07 09:27:04.260942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.001 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:09.001 #47 NEW cov: 12415 ft: 14193 corp: 11/335b lim: 40 exec/s: 0 rss: 
74Mb L: 24/40 MS: 1 EraseBytes- 00:08:09.001 [2024-10-07 09:27:04.331161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.331189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.331291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.331309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.331404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.331420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.331517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.331533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.001 #48 NEW cov: 12415 ft: 14258 corp: 12/374b lim: 40 exec/s: 0 rss: 74Mb L: 39/40 MS: 1 ShuffleBytes- 00:08:09.001 [2024-10-07 09:27:04.381396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.381424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.381526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.381544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.381645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.381662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.381769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.381785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.001 #49 NEW cov: 12415 ft: 14281 corp: 13/411b lim: 40 exec/s: 49 rss: 75Mb L: 37/40 MS: 1 EraseBytes- 00:08:09.001 [2024-10-07 09:27:04.451313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff30ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.451339] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.451437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.451453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.451545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.451562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.001 #50 NEW cov: 12415 ft: 14327 corp: 14/438b lim: 40 exec/s: 50 rss: 75Mb L: 27/40 MS: 1 ChangeBit- 00:08:09.001 [2024-10-07 09:27:04.522428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.522457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.522553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff03 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.522570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.522663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.522678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.522777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:002c0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.522795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.001 [2024-10-07 09:27:04.522897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.001 [2024-10-07 09:27:04.522914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.001 #51 NEW cov: 12415 ft: 14366 corp: 15/478b lim: 40 exec/s: 51 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:09.260 [2024-10-07 09:27:04.571359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff00ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.260 [2024-10-07 09:27:04.571386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.260 #52 NEW cov: 12415 ft: 14690 corp: 16/491b lim: 40 exec/s: 52 rss: 75Mb L: 13/40 MS: 1 EraseBytes- 00:08:09.260 [2024-10-07 09:27:04.642527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff30ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.260 [2024-10-07 09:27:04.642556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.260 [2024-10-07 09:27:04.642653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.260 [2024-10-07 09:27:04.642671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.642772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff29ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.642788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.261 #53 NEW cov: 12415 ft: 14706 corp: 17/518b lim: 40 exec/s: 53 rss: 75Mb L: 27/40 MS: 1 ChangeByte- 00:08:09.261 [2024-10-07 09:27:04.712052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.712078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.261 #54 NEW cov: 12415 ft: 14790 corp: 18/532b lim: 40 exec/s: 54 rss: 75Mb L: 14/40 MS: 1 EraseBytes- 00:08:09.261 [2024-10-07 09:27:04.763257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.763282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.763391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.763407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.763507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.763523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.763611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.763627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.261 #55 NEW cov: 12415 ft: 14853 corp: 19/564b lim: 40 exec/s: 55 rss: 75Mb L: 32/40 MS: 1 EraseBytes- 00:08:09.261 [2024-10-07 09:27:04.813668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.813695] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.813790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ff30ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.813806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.813917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.813932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.261 [2024-10-07 09:27:04.814036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.261 [2024-10-07 09:27:04.814050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.521 #56 NEW cov: 12415 ft: 14880 corp: 20/599b lim: 40 exec/s: 56 rss: 75Mb L: 35/40 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:08:09.521 [2024-10-07 09:27:04.863011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:04.863039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.521 #57 NEW cov: 12415 ft: 14906 corp: 21/613b lim: 40 exec/s: 57 rss: 75Mb L: 14/40 MS: 1 EraseBytes- 00:08:09.521 [2024-10-07 09:27:04.913249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:04.913277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.521 #58 NEW cov: 12415 ft: 14915 corp: 22/621b lim: 40 exec/s: 58 rss: 75Mb L: 8/40 MS: 1 CrossOver- 00:08:09.521 [2024-10-07 09:27:04.974506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:04.974536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.521 [2024-10-07 09:27:04.974639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:04.974656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.521 [2024-10-07 09:27:04.974759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:04.974777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.521 [2024-10-07 09:27:04.974882] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:04.974900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.521 #59 NEW cov: 12415 ft: 14916 corp: 23/660b lim: 40 exec/s: 59 rss: 75Mb L: 39/40 MS: 1 CrossOver- 00:08:09.521 [2024-10-07 09:27:05.054781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:05.054823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.521 [2024-10-07 09:27:05.054924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:05.054942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.521 [2024-10-07 09:27:05.055039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.521 [2024-10-07 09:27:05.055055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.521 #60 NEW cov: 12415 ft: 14985 corp: 24/686b lim: 40 exec/s: 60 rss: 75Mb L: 26/40 MS: 1 ShuffleBytes- 00:08:09.781 [2024-10-07 09:27:05.105251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.105282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.105380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ff30ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.105397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.105503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.105520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.105617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.105634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.781 #61 NEW cov: 12415 ft: 15000 corp: 25/721b lim: 40 exec/s: 61 rss: 75Mb L: 35/40 MS: 1 ChangeByte- 00:08:09.781 [2024-10-07 09:27:05.174470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.174505] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.781 #62 NEW cov: 12415 ft: 15011 corp: 26/729b lim: 40 exec/s: 62 rss: 75Mb L: 8/40 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:08:09.781 [2024-10-07 09:27:05.245990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.246022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.246126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff03 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.246144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.246248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ce000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.246266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.246371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:002c0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.246387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.246492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.246507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.781 #63 NEW cov: 12415 ft: 15029 corp: 27/769b lim: 40 exec/s: 63 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:08:09.781 [2024-10-07 09:27:05.296188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.296217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.296313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff03 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.296328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.296433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ce000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.296449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.296551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 
cdw10:002c0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.296568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.781 [2024-10-07 09:27:05.296667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.781 [2024-10-07 09:27:05.296681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:09.781 #64 NEW cov: 12415 ft: 15033 corp: 28/809b lim: 40 exec/s: 64 rss: 75Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:10.041 [2024-10-07 09:27:05.365938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff30ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.041 [2024-10-07 09:27:05.365964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.041 [2024-10-07 09:27:05.366066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.041 [2024-10-07 09:27:05.366082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.041 [2024-10-07 09:27:05.366174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fd29ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:10.041 [2024-10-07 09:27:05.366190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.041 #65 NEW cov: 12415 ft: 15081 corp: 29/836b lim: 40 exec/s: 32 rss: 75Mb L: 27/40 MS: 1 ChangeBit- 00:08:10.041 #65 DONE cov: 12415 ft: 15081 corp: 29/836b lim: 40 exec/s: 32 rss: 75Mb 00:08:10.041 ###### Recommended dictionary. ###### 00:08:10.041 "\377\377\377\377\377\377\377\000" # Uses: 1 00:08:10.041 ###### End of recommended dictionary. 
######
00:08:10.041 Done 65 runs in 2 second(s)
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414'
00:08:10.041 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:08:10.042 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:08:10.042 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:08:10.042 09:27:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14
[2024-10-07 09:27:05.588308] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
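
The xtrace above is the whole per-run harness setup from test/fuzz/llvm/nvmf/run.sh: pick a port derived from the fuzzer number, patch it into the JSON config, write LeakSanitizer suppressions, then launch the fuzzer. A condensed sketch of those steps follows; here $rootdir abbreviates the /var/jenkins/workspace/short-fuzz-phy-autotest/spdk checkout, and the output redirections are inferred (the trace shows the sed and echo commands but not where their stdout goes; the files are later consumed as /tmp/fuzz_json_14.conf and /var/tmp/suppress_nvmf_fuzz).

    # Condensed sketch of the traced start_llvm_fuzz steps for fuzzer 14.
    # Assumptions: $rootdir is set to the SPDK checkout; redirections inferred.
    fuzzer_type=14
    port="44$(printf '%02d' "$fuzzer_type")"   # run.sh@34: printf %02d 14 -> port=4414
    corpus_dir="$rootdir/../corpus/llvm_nvmf_$fuzzer_type"
    nvmf_cfg="/tmp/fuzz_json_$fuzzer_type.conf"
    suppress_file=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$corpus_dir"
    # Give this run a private TCP listener port in the JSON config (run.sh@38).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # LeakSanitizer suppressions for two known leaks (run.sh@41-42).
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    # run.sh@45: run fuzzer 14 (-Z) for the -t time budget against the TCP trid.
    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$rootdir/../output/llvm/" \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
      -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"
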
00:08:10.042 [2024-10-07 09:27:05.588384] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492192 ] 00:08:10.611 [2024-10-07 09:27:05.867009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.611 [2024-10-07 09:27:05.959673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.611 [2024-10-07 09:27:06.018836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.611 [2024-10-07 09:27:06.035045] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:10.611 INFO: Running with entropic power schedule (0xFF, 100). 00:08:10.611 INFO: Seed: 3706968102 00:08:10.611 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:10.611 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:10.611 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:10.611 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.611 #2 INITED exec/s: 0 rss: 67Mb 00:08:10.611 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:10.611 This may also happen if the target rejected all inputs we tried so far 00:08:10.611 [2024-10-07 09:27:06.084323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.611 [2024-10-07 09:27:06.084351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.870 NEW_FUNC[1/715]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:10.870 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:10.870 #5 NEW cov: 12183 ft: 12173 corp: 2/10b lim: 35 exec/s: 0 rss: 74Mb L: 9/9 MS: 3 CopyPart-ShuffleBytes-CMP- DE: "@\000\000\000\000\000\000\000"- 00:08:10.870 [2024-10-07 09:27:06.415426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.870 [2024-10-07 09:27:06.415464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.130 NEW_FUNC[1/2]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:11.130 NEW_FUNC[2/2]: 0x1343908 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1766 00:08:11.130 #6 NEW cov: 12329 ft: 13238 corp: 3/26b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 CopyPart- 00:08:11.130 [2024-10-07 09:27:06.485534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.130 [2024-10-07 09:27:06.485565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.130 [2024-10-07 09:27:06.485641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.485659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.131 #9 NEW cov: 12342 ft: 13580 corp: 4/42b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 3 ChangeBinInt-ChangeBit-InsertRepeatedBytes- 00:08:11.131 [2024-10-07 09:27:06.525653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.525680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.131 #10 NEW cov: 12427 ft: 13860 corp: 5/58b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 ChangeBinInt- 00:08:11.131 [2024-10-07 09:27:06.585830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.585858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.131 [2024-10-07 09:27:06.585937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.585965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.131 #11 NEW cov: 12427 ft: 14058 corp: 6/74b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 CrossOver- 00:08:11.131 [2024-10-07 09:27:06.645994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.646022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.131 [2024-10-07 09:27:06.646099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.646116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.131 #17 NEW cov: 12427 ft: 14163 corp: 7/90b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 CopyPart- 00:08:11.131 [2024-10-07 09:27:06.686236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.686262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.131 [2024-10-07 09:27:06.686327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.686341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.131 [2024-10-07 09:27:06.686416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.131 [2024-10-07 09:27:06.686433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.391 #18 NEW cov: 12427 ft: 14440 corp: 
8/114b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 PersAutoDict- DE: "@\000\000\000\000\000\000\000"- 00:08:11.391 [2024-10-07 09:27:06.746477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.746502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.391 [2024-10-07 09:27:06.746580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.746595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.391 #27 NEW cov: 12427 ft: 14520 corp: 9/137b lim: 35 exec/s: 0 rss: 75Mb L: 23/24 MS: 4 ShuffleBytes-InsertByte-ChangeBit-InsertRepeatedBytes- 00:08:11.391 [2024-10-07 09:27:06.786384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.786409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.391 [2024-10-07 09:27:06.786472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.786486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.391 #28 NEW cov: 12427 ft: 14593 corp: 10/155b lim: 35 exec/s: 0 rss: 75Mb L: 18/24 MS: 1 CopyPart- 00:08:11.391 [2024-10-07 09:27:06.826680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.826706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.391 [2024-10-07 09:27:06.826818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.826833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.391 NEW_FUNC[1/3]: 0x46c9d8 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:08:11.391 NEW_FUNC[2/3]: 0x1338168 in temp_threshold_opts_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1641 00:08:11.391 #29 NEW cov: 12481 ft: 14703 corp: 11/179b lim: 35 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\004"- 00:08:11.391 [2024-10-07 09:27:06.886861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.886886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.391 [2024-10-07 09:27:06.886947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.886962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:11.391 [2024-10-07 09:27:06.887024] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.887040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.391 #30 NEW cov: 12481 ft: 14721 corp: 12/204b lim: 35 exec/s: 0 rss: 75Mb L: 25/25 MS: 1 InsertByte- 00:08:11.391 [2024-10-07 09:27:06.947098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.947123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.391 [2024-10-07 09:27:06.947202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.391 [2024-10-07 09:27:06.947216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.650 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:11.650 #31 NEW cov: 12504 ft: 14820 corp: 13/227b lim: 35 exec/s: 0 rss: 75Mb L: 23/25 MS: 1 CrossOver- 00:08:11.650 [2024-10-07 09:27:07.007199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.650 [2024-10-07 09:27:07.007223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.650 [2024-10-07 09:27:07.007288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.007303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.007366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.007381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.651 #32 NEW cov: 12504 ft: 14844 corp: 14/253b lim: 35 exec/s: 0 rss: 75Mb L: 26/26 MS: 1 CMP- DE: "\001\020"- 00:08:11.651 [2024-10-07 09:27:07.067513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.067538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.067617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.067632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.067694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.067709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID 
NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.067770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.067786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.651 #33 NEW cov: 12504 ft: 15160 corp: 15/285b lim: 35 exec/s: 33 rss: 75Mb L: 32/32 MS: 1 CrossOver- 00:08:11.651 [2024-10-07 09:27:07.127668] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.127694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.127757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.127777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.127837] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.127853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.127914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.127930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.651 #34 NEW cov: 12504 ft: 15173 corp: 16/317b lim: 35 exec/s: 34 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:11.651 [2024-10-07 09:27:07.187502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.187526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.651 [2024-10-07 09:27:07.187588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.651 [2024-10-07 09:27:07.187601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.651 #35 NEW cov: 12504 ft: 15217 corp: 17/334b lim: 35 exec/s: 35 rss: 76Mb L: 17/32 MS: 1 InsertByte- 00:08:11.911 [2024-10-07 09:27:07.227612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.227641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.227701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.227717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:11.911 #36 NEW cov: 12504 ft: 15239 corp: 18/350b lim: 35 exec/s: 36 rss: 76Mb L: 16/32 MS: 1 ShuffleBytes- 00:08:11.911 [2024-10-07 09:27:07.288106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.288131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.288211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.288226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.288286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.288302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.288365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.288381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.911 #37 NEW cov: 12504 ft: 15267 corp: 19/378b lim: 35 exec/s: 37 rss: 76Mb L: 28/32 MS: 1 InsertRepeatedBytes- 00:08:11.911 [2024-10-07 09:27:07.328137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.328163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.328297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.328312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.911 #38 NEW cov: 12504 ft: 15304 corp: 20/402b lim: 35 exec/s: 38 rss: 76Mb L: 24/32 MS: 1 ShuffleBytes- 00:08:11.911 [2024-10-07 09:27:07.368169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.368193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.368316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.368331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.911 #39 NEW cov: 12504 ft: 15332 corp: 21/426b lim: 35 exec/s: 39 rss: 76Mb L: 24/32 MS: 1 ChangeBit- 00:08:11.911 [2024-10-07 09:27:07.408472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.408497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.408577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.408592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.408651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.408667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.408729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.408744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.911 #40 NEW cov: 12504 ft: 15348 corp: 22/458b lim: 35 exec/s: 40 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:08:11.911 [2024-10-07 09:27:07.448243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.448268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.911 [2024-10-07 09:27:07.448346] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.911 [2024-10-07 09:27:07.448361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.170 #41 NEW cov: 12504 ft: 15412 corp: 23/475b lim: 35 exec/s: 41 rss: 76Mb L: 17/32 MS: 1 CopyPart- 00:08:12.170 [2024-10-07 09:27:07.508422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.508449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.170 [2024-10-07 09:27:07.508526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.508541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.170 #42 NEW cov: 12504 ft: 15442 corp: 24/491b lim: 35 exec/s: 42 rss: 76Mb L: 16/32 MS: 1 ChangeBit- 00:08:12.170 [2024-10-07 09:27:07.548690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.548715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.170 [2024-10-07 09:27:07.548818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.548832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.170 #43 NEW cov: 12504 ft: 15515 
corp: 25/516b lim: 35 exec/s: 43 rss: 76Mb L: 25/32 MS: 1 InsertByte- 00:08:12.170 [2024-10-07 09:27:07.608720] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.608746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.170 #44 NEW cov: 12504 ft: 15561 corp: 26/532b lim: 35 exec/s: 44 rss: 76Mb L: 16/32 MS: 1 ChangeBit- 00:08:12.170 [2024-10-07 09:27:07.649191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.649218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.170 [2024-10-07 09:27:07.649280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.649294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.170 [2024-10-07 09:27:07.649352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.649369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.170 [2024-10-07 09:27:07.649426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.649441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.170 #45 NEW cov: 12504 ft: 15573 corp: 27/564b lim: 35 exec/s: 45 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:08:12.170 [2024-10-07 09:27:07.689135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.170 [2024-10-07 09:27:07.689160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.170 [2024-10-07 09:27:07.689259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.171 [2024-10-07 09:27:07.689273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.171 #46 NEW cov: 12504 ft: 15623 corp: 28/588b lim: 35 exec/s: 46 rss: 76Mb L: 24/32 MS: 1 CMP- DE: "\376\377\377\377\000\000\000\000"- 00:08:12.430 [2024-10-07 09:27:07.748987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000c0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.749019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.430 #47 NEW cov: 12504 ft: 15674 corp: 29/597b lim: 35 exec/s: 47 rss: 76Mb L: 9/32 MS: 1 ChangeBinInt- 00:08:12.430 [2024-10-07 09:27:07.789391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 
09:27:07.789420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.789484] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.789498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.789556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.789572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.430 #48 NEW cov: 12504 ft: 15707 corp: 30/621b lim: 35 exec/s: 48 rss: 76Mb L: 24/32 MS: 1 EraseBytes- 00:08:12.430 [2024-10-07 09:27:07.849631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.849657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.849734] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.849749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.849807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.849829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.430 #49 NEW cov: 12504 ft: 15747 corp: 31/647b lim: 35 exec/s: 49 rss: 77Mb L: 26/32 MS: 1 ShuffleBytes- 00:08:12.430 [2024-10-07 09:27:07.889833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.889857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.889930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.889944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.890002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.890017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.890074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.890090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.430 #50 NEW cov: 12504 ft: 15763 
corp: 32/679b lim: 35 exec/s: 50 rss: 77Mb L: 32/32 MS: 1 PersAutoDict- DE: "\001\020"- 00:08:12.430 [2024-10-07 09:27:07.950063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.950089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.950150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.950163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.950219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.950234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.430 [2024-10-07 09:27:07.950294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.950310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:12.430 #51 NEW cov: 12504 ft: 15772 corp: 33/711b lim: 35 exec/s: 51 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:08:12.430 [2024-10-07 09:27:07.989737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.430 [2024-10-07 09:27:07.989762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.690 #52 NEW cov: 12504 ft: 15778 corp: 34/728b lim: 35 exec/s: 52 rss: 77Mb L: 17/32 MS: 1 InsertByte- 00:08:12.690 [2024-10-07 09:27:08.030090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.690 [2024-10-07 09:27:08.030115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.690 [2024-10-07 09:27:08.030232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.690 [2024-10-07 09:27:08.030247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.690 #53 NEW cov: 12504 ft: 15783 corp: 35/752b lim: 35 exec/s: 53 rss: 77Mb L: 24/32 MS: 1 CopyPart- 00:08:12.690 [2024-10-07 09:27:08.070365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000013 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.690 [2024-10-07 09:27:08.070390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.690 [2024-10-07 09:27:08.070466] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.690 [2024-10-07 09:27:08.070480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
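
Throughout these runs, every input the fuzzer submits is echoed as a pair of NOTICE records: nvme_qpair.c:215 (or :225) prints the admin command via nvme_admin_qpair_print_command, and nvme_qpair.c:477 prints its completion via spdk_nvme_print_completion. In the completion line, the parenthesized pair after the status name is the NVMe status code type and status code (SCT/SC), cdw0 is completion dword 0, sqhd is the submission-queue head pointer, and p/m/dnr are the phase-tag, More, and Do Not Retry bits. A hypothetical shell helper, not part of the harness, that maps the three SCT/SC pairs seen in this log back to the names printed alongside them:

    # Hypothetical helper: name the "(SCT/SC)" pairs this log contains.
    decode_status() {
      case "$1" in
        00/01) echo "INVALID OPCODE" ;;           # generic status, invalid command opcode
        00/02) echo "INVALID FIELD" ;;            # generic status, invalid field in command
        01/0d) echo "FEATURE ID NOT SAVEABLE" ;;  # command-specific status
        *)     echo "unrecognized status $1" ;;
      esac
    }
    decode_status 01/0d   # prints: FEATURE ID NOT SAVEABLE
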
00:08:12.690 [2024-10-07 09:27:08.070539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.690 [2024-10-07 09:27:08.070555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.690 #54 NEW cov: 12504 ft: 15786 corp: 36/784b lim: 35 exec/s: 27 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:08:12.690 #54 DONE cov: 12504 ft: 15786 corp: 36/784b lim: 35 exec/s: 27 rss: 77Mb 00:08:12.690 ###### Recommended dictionary. ###### 00:08:12.690 "@\000\000\000\000\000\000\000" # Uses: 1 00:08:12.690 "\000\000\000\000\000\000\000\004" # Uses: 0 00:08:12.690 "\001\020" # Uses: 1 00:08:12.690 "\376\377\377\377\000\000\000\000" # Uses: 0 00:08:12.690 ###### End of recommended dictionary. ###### 00:08:12.690 Done 54 runs in 2 second(s) 00:08:12.690 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.953 09:27:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:12.953 [2024-10-07 09:27:08.302482] Starting SPDK v25.01-pre git sha1 3950cd1bb 
/ DPDK 24.03.0 initialization... 00:08:12.953 [2024-10-07 09:27:08.302562] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492618 ] 00:08:13.216 [2024-10-07 09:27:08.610885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.216 [2024-10-07 09:27:08.704851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.216 [2024-10-07 09:27:08.764019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.475 [2024-10-07 09:27:08.780235] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:13.475 INFO: Running with entropic power schedule (0xFF, 100). 00:08:13.475 INFO: Seed: 2158003889 00:08:13.475 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:13.475 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:13.475 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:13.475 INFO: A corpus is not provided, starting from an empty corpus 00:08:13.475 #2 INITED exec/s: 0 rss: 68Mb 00:08:13.475 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:13.475 This may also happen if the target rejected all inputs we tried so far 00:08:13.475 [2024-10-07 09:27:08.836088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.475 [2024-10-07 09:27:08.836120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.475 [2024-10-07 09:27:08.836179] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.475 [2024-10-07 09:27:08.836193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.475 [2024-10-07 09:27:08.836250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.475 [2024-10-07 09:27:08.836264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.734 NEW_FUNC[1/715]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:13.734 NEW_FUNC[2/715]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:13.734 #4 NEW cov: 12185 ft: 12179 corp: 2/29b lim: 35 exec/s: 0 rss: 74Mb L: 28/28 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:13.734 [2024-10-07 09:27:09.156821] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.734 [2024-10-07 09:27:09.156870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.734 #10 NEW cov: 12298 ft: 13332 corp: 3/43b lim: 35 exec/s: 0 rss: 74Mb L: 14/28 MS: 1 EraseBytes- 00:08:13.734 #11 NEW cov: 12304 ft: 13799 corp: 4/51b lim: 35 exec/s: 0 rss: 74Mb L: 8/28 MS: 1 
EraseBytes- 00:08:13.734 [2024-10-07 09:27:09.277231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.734 [2024-10-07 09:27:09.277260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.734 [2024-10-07 09:27:09.277337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.734 [2024-10-07 09:27:09.277352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.734 [2024-10-07 09:27:09.277412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.735 [2024-10-07 09:27:09.277426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.994 #12 NEW cov: 12389 ft: 14120 corp: 5/84b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:13.994 [2024-10-07 09:27:09.317370] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.317397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.994 [2024-10-07 09:27:09.317456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.317470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.994 [2024-10-07 09:27:09.317528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.317542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.994 #13 NEW cov: 12389 ft: 14272 corp: 6/117b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:13.994 [2024-10-07 09:27:09.377522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.377547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.994 [2024-10-07 09:27:09.377624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.377639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.994 [2024-10-07 09:27:09.377699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.377713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.994 #14 NEW cov: 12389 ft: 14386 corp: 7/149b lim: 35 exec/s: 0 rss: 74Mb L: 32/33 MS: 1 EraseBytes- 00:08:13.994 [2024-10-07 09:27:09.437414] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.437439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.994 #15 NEW cov: 12389 ft: 14441 corp: 8/163b lim: 35 exec/s: 0 rss: 74Mb L: 14/33 MS: 1 CopyPart- 00:08:13.994 [2024-10-07 09:27:09.477802] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.477831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.994 [2024-10-07 09:27:09.477918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.994 [2024-10-07 09:27:09.477933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.994 [2024-10-07 09:27:09.477992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.995 [2024-10-07 09:27:09.478005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.995 #16 NEW cov: 12389 ft: 14478 corp: 9/196b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertByte- 00:08:13.995 [2024-10-07 09:27:09.537624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.995 [2024-10-07 09:27:09.537648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.995 [2024-10-07 09:27:09.537726] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.995 [2024-10-07 09:27:09.537741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.254 #17 NEW cov: 12389 ft: 14625 corp: 10/210b lim: 35 exec/s: 0 rss: 75Mb L: 14/33 MS: 1 ShuffleBytes- 00:08:14.254 #21 NEW cov: 12389 ft: 14689 corp: 11/220b lim: 35 exec/s: 0 rss: 75Mb L: 10/33 MS: 4 CrossOver-ChangeBinInt-ChangeBit-CopyPart- 00:08:14.254 [2024-10-07 09:27:09.617851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.254 [2024-10-07 09:27:09.617877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.254 [2024-10-07 09:27:09.617953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.254 [2024-10-07 09:27:09.617968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.254 #22 NEW cov: 12389 ft: 14728 corp: 12/234b lim: 35 exec/s: 0 rss: 75Mb L: 14/33 MS: 1 ChangeBinInt- 00:08:14.254 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:14.254 #23 NEW cov: 12412 ft: 14762 corp: 13/244b lim: 35 exec/s: 0 rss: 75Mb L: 10/33 MS: 1 CopyPart- 
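
As for the libFuzzer status lines themselves: cov is the number of covered code edges (from the inline 8-bit counters loaded at startup), ft counts features (edge and hit-count combinations), corp is corpus units over total bytes, lim is the current input-length cap, L is this input's length over the largest seen, and MS lists the mutation sequence that produced it. A hypothetical one-liner for pulling corpus growth out of a saved console log (build.log is an assumed file name, not something the harness writes):

    # Assumed log file name; prints "#N cov ft units/bytes" per NEW event.
    grep -o '#[0-9]* NEW cov: [0-9]* ft: [0-9]* corp: [0-9]*/[0-9]*b' build.log \
      | awk '{print $1, $4, $6, $8}'   # e.g. "#23 12412 14762 13/244b"
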
00:08:14.254 #24 NEW cov: 12412 ft: 14773 corp: 14/252b lim: 35 exec/s: 0 rss: 75Mb L: 8/33 MS: 1 ChangeBit- 00:08:14.254 [2024-10-07 09:27:09.798703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.254 [2024-10-07 09:27:09.798730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.254 [2024-10-07 09:27:09.798807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.254 [2024-10-07 09:27:09.798827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.254 [2024-10-07 09:27:09.798886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.254 [2024-10-07 09:27:09.798900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.513 #25 NEW cov: 12412 ft: 14817 corp: 15/280b lim: 35 exec/s: 25 rss: 75Mb L: 28/33 MS: 1 ChangeByte- 00:08:14.513 #26 NEW cov: 12412 ft: 14894 corp: 16/288b lim: 35 exec/s: 26 rss: 75Mb L: 8/33 MS: 1 ChangeBinInt- 00:08:14.514 [2024-10-07 09:27:09.898835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.898861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.514 [2024-10-07 09:27:09.898921] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.898936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.514 #28 NEW cov: 12412 ft: 14974 corp: 17/309b lim: 35 exec/s: 28 rss: 75Mb L: 21/33 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:14.514 [2024-10-07 09:27:09.939084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.939110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.514 [2024-10-07 09:27:09.939173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.939187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.514 [2024-10-07 09:27:09.939249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.939262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.514 #29 NEW cov: 12412 ft: 15092 corp: 18/337b lim: 35 exec/s: 29 rss: 75Mb L: 28/33 MS: 1 ChangeBinInt- 00:08:14.514 [2024-10-07 09:27:09.999279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 
[2024-10-07 09:27:09.999304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.514 [2024-10-07 09:27:09.999380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.999394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.514 [2024-10-07 09:27:09.999453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:09.999467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.514 #30 NEW cov: 12412 ft: 15103 corp: 19/370b lim: 35 exec/s: 30 rss: 75Mb L: 33/33 MS: 1 ChangeBit- 00:08:14.514 [2024-10-07 09:27:10.059205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:10.059242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.514 [2024-10-07 09:27:10.059306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.514 [2024-10-07 09:27:10.059321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.773 #31 NEW cov: 12412 ft: 15122 corp: 20/385b lim: 35 exec/s: 31 rss: 75Mb L: 15/33 MS: 1 InsertByte- 00:08:14.773 [2024-10-07 09:27:10.119692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.773 [2024-10-07 09:27:10.119722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.773 [2024-10-07 09:27:10.119790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.773 [2024-10-07 09:27:10.119805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.773 [2024-10-07 09:27:10.119872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.773 [2024-10-07 09:27:10.119887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.773 #32 NEW cov: 12412 ft: 15140 corp: 21/418b lim: 35 exec/s: 32 rss: 75Mb L: 33/33 MS: 1 ChangeByte- 00:08:14.773 #33 NEW cov: 12412 ft: 15207 corp: 22/431b lim: 35 exec/s: 33 rss: 75Mb L: 13/33 MS: 1 CrossOver- 00:08:14.773 [2024-10-07 09:27:10.219758] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000004ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.773 [2024-10-07 09:27:10.219784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.773 [2024-10-07 09:27:10.219851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:08:14.773 [2024-10-07 09:27:10.219865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.773 #34 NEW cov: 12412 ft: 15230 corp: 23/453b lim: 35 exec/s: 34 rss: 75Mb L: 22/33 MS: 1 InsertByte- 00:08:14.773 [2024-10-07 09:27:10.279778] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.773 [2024-10-07 09:27:10.279803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.773 [2024-10-07 09:27:10.279887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.773 [2024-10-07 09:27:10.279903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.773 #35 NEW cov: 12412 ft: 15248 corp: 24/468b lim: 35 exec/s: 35 rss: 75Mb L: 15/33 MS: 1 ChangeBit- 00:08:15.033 #36 NEW cov: 12412 ft: 15272 corp: 25/476b lim: 35 exec/s: 36 rss: 76Mb L: 8/33 MS: 1 ChangeBit- 00:08:15.033 [2024-10-07 09:27:10.400113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.400139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.033 [2024-10-07 09:27:10.400217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.400232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.033 #37 NEW cov: 12412 ft: 15281 corp: 26/490b lim: 35 exec/s: 37 rss: 76Mb L: 14/33 MS: 1 ChangeBinInt- 00:08:15.033 #38 NEW cov: 12412 ft: 15282 corp: 27/498b lim: 35 exec/s: 38 rss: 76Mb L: 8/33 MS: 1 ShuffleBytes- 00:08:15.033 [2024-10-07 09:27:10.500702] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.500730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.033 [2024-10-07 09:27:10.500811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.500833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.033 [2024-10-07 09:27:10.500895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.500912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.033 [2024-10-07 09:27:10.500974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.500988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.033 #39 NEW cov: 
12412 ft: 15439 corp: 28/531b lim: 35 exec/s: 39 rss: 76Mb L: 33/33 MS: 1 ChangeByte- 00:08:15.033 [2024-10-07 09:27:10.540874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.540900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.033 [2024-10-07 09:27:10.540978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.033 [2024-10-07 09:27:10.540992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.033 NEW_FUNC[1/2]: 0x4705d8 in feat_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:332 00:08:15.033 NEW_FUNC[2/2]: 0x1332608 in nvmf_ctrlr_get_features_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1707 00:08:15.033 #40 NEW cov: 12461 ft: 15557 corp: 29/564b lim: 35 exec/s: 40 rss: 76Mb L: 33/33 MS: 1 CrossOver- 00:08:15.033 [2024-10-07 09:27:10.590808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.034 [2024-10-07 09:27:10.590838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.034 [2024-10-07 09:27:10.590917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.034 [2024-10-07 09:27:10.590932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.293 #41 NEW cov: 12461 ft: 15560 corp: 30/586b lim: 35 exec/s: 41 rss: 76Mb L: 22/33 MS: 1 CrossOver- 00:08:15.293 [2024-10-07 09:27:10.650798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.293 [2024-10-07 09:27:10.650829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.293 [2024-10-07 09:27:10.650893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.293 [2024-10-07 09:27:10.650908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.293 #42 NEW cov: 12461 ft: 15570 corp: 31/602b lim: 35 exec/s: 42 rss: 76Mb L: 16/33 MS: 1 InsertByte- 00:08:15.293 [2024-10-07 09:27:10.711034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.293 [2024-10-07 09:27:10.711059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.293 #43 NEW cov: 12461 ft: 15597 corp: 32/616b lim: 35 exec/s: 43 rss: 76Mb L: 14/33 MS: 1 ChangeByte- 00:08:15.293 [2024-10-07 09:27:10.751127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.293 [2024-10-07 09:27:10.751152] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.293 [2024-10-07 09:27:10.751214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.293 [2024-10-07 09:27:10.751232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.293 #44 NEW cov: 12461 ft: 15604 corp: 33/630b lim: 35 exec/s: 44 rss: 76Mb L: 14/33 MS: 1 CopyPart- 00:08:15.293 #45 NEW cov: 12461 ft: 15609 corp: 34/638b lim: 35 exec/s: 22 rss: 76Mb L: 8/33 MS: 1 CopyPart- 00:08:15.293 #45 DONE cov: 12461 ft: 15609 corp: 34/638b lim: 35 exec/s: 22 rss: 76Mb 00:08:15.293 Done 45 runs in 2 second(s) 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:15.553 09:27:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:15.553 [2024-10-07 09:27:11.020664] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:15.553 [2024-10-07 09:27:11.020747] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492980 ] 00:08:15.813 [2024-10-07 09:27:11.344711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.072 [2024-10-07 09:27:11.438780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.072 [2024-10-07 09:27:11.498045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.072 [2024-10-07 09:27:11.514251] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:16.072 INFO: Running with entropic power schedule (0xFF, 100). 00:08:16.072 INFO: Seed: 598023381 00:08:16.072 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:16.072 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:16.072 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:16.072 INFO: A corpus is not provided, starting from an empty corpus 00:08:16.072 #2 INITED exec/s: 0 rss: 67Mb 00:08:16.072 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:16.072 This may also happen if the target rejected all inputs we tried so far 00:08:16.072 [2024-10-07 09:27:11.569506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.072 [2024-10-07 09:27:11.569539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.331 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:16.332 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:16.332 #6 NEW cov: 12274 ft: 12265 corp: 2/32b lim: 105 exec/s: 0 rss: 74Mb L: 31/31 MS: 4 CopyPart-CrossOver-InsertByte-InsertRepeatedBytes- 00:08:16.332 [2024-10-07 09:27:11.890422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.332 [2024-10-07 09:27:11.890474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.591 #8 NEW cov: 12388 ft: 12918 corp: 3/55b lim: 105 exec/s: 0 rss: 74Mb L: 23/31 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:16.591 [2024-10-07 09:27:11.930401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72057594037010432 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:11.930430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.591 #9 NEW cov: 12394 ft: 13122 corp: 4/86b lim: 105 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 CMP- DE: "\362\000\000\000"- 00:08:16.591 [2024-10-07 09:27:11.990950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924837434693 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:11.990980] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.591 [2024-10-07 09:27:11.991045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:11.991061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.591 [2024-10-07 09:27:11.991118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:11.991134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.591 [2024-10-07 09:27:11.991189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:11.991205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.591 #16 NEW cov: 12479 ft: 13957 corp: 5/170b lim: 105 exec/s: 0 rss: 74Mb L: 84/84 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:16.591 [2024-10-07 09:27:12.030696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:12.030723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.591 #17 NEW cov: 12479 ft: 14159 corp: 6/201b lim: 105 exec/s: 0 rss: 74Mb L: 31/84 MS: 1 ChangeByte- 00:08:16.591 [2024-10-07 09:27:12.070782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:12.070808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.591 #18 NEW cov: 12479 ft: 14229 corp: 7/225b lim: 105 exec/s: 0 rss: 74Mb L: 24/84 MS: 1 InsertByte- 00:08:16.591 [2024-10-07 09:27:12.131318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924835337541 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:12.131345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.591 [2024-10-07 09:27:12.131417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:12.131433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.591 [2024-10-07 09:27:12.131487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:12.131503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.591 [2024-10-07 09:27:12.131558] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.591 [2024-10-07 09:27:12.131573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:16.850 #19 NEW cov: 12479 ft: 14347 corp: 8/310b lim: 105 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 InsertByte- 00:08:16.850 [2024-10-07 09:27:12.191140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:15276209936040722431 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.191167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.850 #20 NEW cov: 12479 ft: 14435 corp: 9/334b lim: 105 exec/s: 0 rss: 74Mb L: 24/85 MS: 1 ChangeByte- 00:08:16.850 [2024-10-07 09:27:12.251286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.251313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.850 #21 NEW cov: 12479 ft: 14488 corp: 10/358b lim: 105 exec/s: 0 rss: 74Mb L: 24/85 MS: 1 CopyPart- 00:08:16.850 [2024-10-07 09:27:12.291366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.291393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.850 #22 NEW cov: 12479 ft: 14541 corp: 11/393b lim: 105 exec/s: 0 rss: 74Mb L: 35/85 MS: 1 PersAutoDict- DE: "\362\000\000\000"- 00:08:16.850 [2024-10-07 09:27:12.331524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.331550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.850 #23 NEW cov: 12479 ft: 14558 corp: 12/417b lim: 105 exec/s: 0 rss: 74Mb L: 24/85 MS: 1 ShuffleBytes- 00:08:16.850 [2024-10-07 09:27:12.371990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.372017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.850 [2024-10-07 09:27:12.372092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316422312124255 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.372107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.850 [2024-10-07 09:27:12.372166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.372181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:16.850 [2024-10-07 
09:27:12.372236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.850 [2024-10-07 09:27:12.372252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.109 #24 NEW cov: 12479 ft: 14608 corp: 13/515b lim: 105 exec/s: 0 rss: 75Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:08:17.109 [2024-10-07 09:27:12.431817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.431845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:17.109 #25 NEW cov: 12502 ft: 14770 corp: 14/546b lim: 105 exec/s: 0 rss: 75Mb L: 31/98 MS: 1 ChangeBit- 00:08:17.109 [2024-10-07 09:27:12.492343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.492372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.492426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.492441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.492495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.492512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.492565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18442803428330569727 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.492579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.109 #26 NEW cov: 12502 ft: 14799 corp: 15/631b lim: 105 exec/s: 0 rss: 75Mb L: 85/98 MS: 1 InsertRepeatedBytes- 00:08:17.109 [2024-10-07 09:27:12.552159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.552188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 #27 NEW cov: 12502 ft: 14814 corp: 16/666b lim: 105 exec/s: 27 rss: 75Mb L: 35/98 MS: 1 CrossOver- 00:08:17.109 [2024-10-07 09:27:12.592622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924835337541 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.592650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 
09:27:12.592703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.592719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.592772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.592793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.592867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.592884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.109 #28 NEW cov: 12502 ft: 14872 corp: 17/751b lim: 105 exec/s: 28 rss: 75Mb L: 85/98 MS: 1 CopyPart- 00:08:17.109 [2024-10-07 09:27:12.652661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4278190080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.652688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.652743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.652759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.109 [2024-10-07 09:27:12.652820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1095216660480 len:54272 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.109 [2024-10-07 09:27:12.652835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.368 #29 NEW cov: 12502 ft: 15174 corp: 18/823b lim: 105 exec/s: 29 rss: 75Mb L: 72/98 MS: 1 InsertRepeatedBytes- 00:08:17.368 [2024-10-07 09:27:12.712940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924835337541 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.712967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.368 [2024-10-07 09:27:12.713041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.713058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.368 [2024-10-07 09:27:12.713111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.713127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.368 
[2024-10-07 09:27:12.713181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.713198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.368 #30 NEW cov: 12502 ft: 15181 corp: 19/910b lim: 105 exec/s: 30 rss: 75Mb L: 87/98 MS: 1 CopyPart- 00:08:17.368 [2024-10-07 09:27:12.752793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:15276209936040722431 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.752826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.368 [2024-10-07 09:27:12.752889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.752906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.368 #31 NEW cov: 12502 ft: 15472 corp: 20/960b lim: 105 exec/s: 31 rss: 75Mb L: 50/98 MS: 1 InsertRepeatedBytes- 00:08:17.368 [2024-10-07 09:27:12.793191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.368 [2024-10-07 09:27:12.793218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.368 [2024-10-07 09:27:12.793275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.793290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.793346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.793361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.793413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.793429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.369 #32 NEW cov: 12502 ft: 15548 corp: 21/1053b lim: 105 exec/s: 32 rss: 75Mb L: 93/98 MS: 1 InsertRepeatedBytes- 00:08:17.369 [2024-10-07 09:27:12.853325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924835337541 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.853354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.853425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 
[2024-10-07 09:27:12.853442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.853498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.853513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.853567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.853583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.369 #33 NEW cov: 12502 ft: 15558 corp: 22/1145b lim: 105 exec/s: 33 rss: 75Mb L: 92/98 MS: 1 InsertRepeatedBytes- 00:08:17.369 [2024-10-07 09:27:12.913495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.913523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.913595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.913611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.913665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.913679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.369 [2024-10-07 09:27:12.913735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.369 [2024-10-07 09:27:12.913749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.628 #34 NEW cov: 12502 ft: 15564 corp: 23/1238b lim: 105 exec/s: 34 rss: 75Mb L: 93/98 MS: 1 CopyPart- 00:08:17.628 [2024-10-07 09:27:12.973297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:12.973325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.628 #35 NEW cov: 12502 ft: 15592 corp: 24/1273b lim: 105 exec/s: 35 rss: 75Mb L: 35/98 MS: 1 ChangeASCIIInt- 00:08:17.628 [2024-10-07 09:27:13.013387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.013416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.628 #36 NEW cov: 12502 ft: 
15597 corp: 25/1304b lim: 105 exec/s: 36 rss: 75Mb L: 31/98 MS: 1 CopyPart- 00:08:17.628 [2024-10-07 09:27:13.053917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924835337541 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.053947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.054002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.054018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.054070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.054089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.054144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.054159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.628 #37 NEW cov: 12502 ft: 15637 corp: 26/1389b lim: 105 exec/s: 37 rss: 75Mb L: 85/98 MS: 1 CopyPart- 00:08:17.628 [2024-10-07 09:27:13.094012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.094040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.094114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.094130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.094184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.094200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.094256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.094275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.628 #38 NEW cov: 12502 ft: 15649 corp: 27/1482b lim: 105 exec/s: 38 rss: 75Mb L: 93/98 MS: 1 CrossOver- 00:08:17.628 [2024-10-07 09:27:13.154143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.154170] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.154228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.154242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.154298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.154313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.628 [2024-10-07 09:27:13.154368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.628 [2024-10-07 09:27:13.154383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.628 #39 NEW cov: 12502 ft: 15658 corp: 28/1575b lim: 105 exec/s: 39 rss: 75Mb L: 93/98 MS: 1 ChangeByte- 00:08:17.887 [2024-10-07 09:27:13.194305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.887 [2024-10-07 09:27:13.194336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.887 [2024-10-07 09:27:13.194377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.887 [2024-10-07 09:27:13.194393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.887 [2024-10-07 09:27:13.194447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.887 [2024-10-07 09:27:13.194463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.887 [2024-10-07 09:27:13.194520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.887 [2024-10-07 09:27:13.194536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.887 #40 NEW cov: 12502 ft: 15707 corp: 29/1668b lim: 105 exec/s: 40 rss: 75Mb L: 93/98 MS: 1 ChangeASCIIInt- 00:08:17.887 [2024-10-07 09:27:13.254076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.887 [2024-10-07 09:27:13.254103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.887 #41 NEW cov: 12502 ft: 15727 corp: 30/1692b lim: 105 exec/s: 41 rss: 75Mb L: 24/98 MS: 1 ChangeBit- 00:08:17.887 [2024-10-07 09:27:13.294299] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.887 [2024-10-07 09:27:13.294327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.887 [2024-10-07 09:27:13.294365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446473641093758975 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.294384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.888 #42 NEW cov: 12502 ft: 15744 corp: 31/1754b lim: 105 exec/s: 42 rss: 75Mb L: 62/98 MS: 1 CopyPart- 00:08:17.888 [2024-10-07 09:27:13.354740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.354767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.888 [2024-10-07 09:27:13.354842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.354858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.888 [2024-10-07 09:27:13.354914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.354929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.888 [2024-10-07 09:27:13.354994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.355010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.888 #43 NEW cov: 12502 ft: 15776 corp: 32/1847b lim: 105 exec/s: 43 rss: 76Mb L: 93/98 MS: 1 PersAutoDict- DE: "\362\000\000\000"- 00:08:17.888 [2024-10-07 09:27:13.414898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.414925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.888 [2024-10-07 09:27:13.414998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991472724691207493 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.415014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.888 [2024-10-07 09:27:13.415069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.415085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:17.888 
[2024-10-07 09:27:13.415140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.888 [2024-10-07 09:27:13.415156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:17.888 #44 NEW cov: 12502 ft: 15791 corp: 33/1947b lim: 105 exec/s: 44 rss: 76Mb L: 100/100 MS: 1 CrossOver- 00:08:18.147 [2024-10-07 09:27:13.455033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.455062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.147 [2024-10-07 09:27:13.455134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316422312124255 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.455150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.147 [2024-10-07 09:27:13.455209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.455226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.147 [2024-10-07 09:27:13.455281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.455297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.147 #45 NEW cov: 12502 ft: 15822 corp: 34/2036b lim: 105 exec/s: 45 rss: 76Mb L: 89/100 MS: 1 EraseBytes- 00:08:18.147 [2024-10-07 09:27:13.514820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446743369334915071 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.514848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.147 #46 NEW cov: 12502 ft: 15829 corp: 35/2067b lim: 105 exec/s: 46 rss: 76Mb L: 31/100 MS: 1 ChangeByte- 00:08:18.147 [2024-10-07 09:27:13.555275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4991471924835337541 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.555301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.147 [2024-10-07 09:27:13.555377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.555391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.147 [2024-10-07 09:27:13.555447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 
09:27:13.555464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.147 [2024-10-07 09:27:13.555519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:4991471925827290437 len:17734 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.147 [2024-10-07 09:27:13.555534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.147 #47 NEW cov: 12502 ft: 15846 corp: 36/2152b lim: 105 exec/s: 23 rss: 76Mb L: 85/100 MS: 1 ShuffleBytes- 00:08:18.147 #47 DONE cov: 12502 ft: 15846 corp: 36/2152b lim: 105 exec/s: 23 rss: 76Mb 00:08:18.147 ###### Recommended dictionary. ###### 00:08:18.147 "\362\000\000\000" # Uses: 2 00:08:18.147 ###### End of recommended dictionary. ###### 00:08:18.148 Done 47 runs in 2 second(s) 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:18.407 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:18.408 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:18.408 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:18.408 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:18.408 09:27:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:18.408 [2024-10-07 09:27:13.774745] Starting SPDK v25.01-pre 
git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:18.408 [2024-10-07 09:27:13.774835] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493334 ] 00:08:18.667 [2024-10-07 09:27:14.089355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.667 [2024-10-07 09:27:14.180150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.926 [2024-10-07 09:27:14.239296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.926 [2024-10-07 09:27:14.255503] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:18.926 INFO: Running with entropic power schedule (0xFF, 100). 00:08:18.926 INFO: Seed: 3339019026 00:08:18.926 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:18.926 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:18.926 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:18.926 INFO: A corpus is not provided, starting from an empty corpus 00:08:18.926 #2 INITED exec/s: 0 rss: 67Mb 00:08:18.926 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:18.926 This may also happen if the target rejected all inputs we tried so far 00:08:18.926 [2024-10-07 09:27:14.311291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.926 [2024-10-07 09:27:14.311323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.926 [2024-10-07 09:27:14.311364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.926 [2024-10-07 09:27:14.311381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.926 [2024-10-07 09:27:14.311435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.926 [2024-10-07 09:27:14.311451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.926 [2024-10-07 09:27:14.311506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.926 [2024-10-07 09:27:14.311521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.185 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:19.185 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:19.185 #13 NEW cov: 12293 ft: 12292 corp: 2/101b lim: 120 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:08:19.185 [2024-10-07 09:27:14.652178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.185 [2024-10-07 09:27:14.652217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.185 [2024-10-07 09:27:14.652268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.185 [2024-10-07 09:27:14.652284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.185 [2024-10-07 09:27:14.652340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.185 [2024-10-07 09:27:14.652354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.185 [2024-10-07 09:27:14.652407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:2954361355555045375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.185 [2024-10-07 09:27:14.652422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.185 #14 NEW cov: 12409 ft: 12734 corp: 3/201b lim: 120 exec/s: 0 rss: 74Mb L: 100/100 MS: 1 ChangeByte- 00:08:19.185 [2024-10-07 09:27:14.712297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.185 [2024-10-07 09:27:14.712328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.186 [2024-10-07 09:27:14.712391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.186 [2024-10-07 09:27:14.712409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.186 [2024-10-07 09:27:14.712465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.186 [2024-10-07 09:27:14.712481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.186 [2024-10-07 09:27:14.712535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.186 [2024-10-07 09:27:14.712552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.445 #15 NEW cov: 12415 ft: 12988 corp: 4/307b lim: 120 exec/s: 0 rss: 74Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:08:19.445 [2024-10-07 09:27:14.772418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.772449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.772489] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.772503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.772557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.772576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.772629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.772644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.445 #16 NEW cov: 12500 ft: 13469 corp: 5/413b lim: 120 exec/s: 0 rss: 74Mb L: 106/106 MS: 1 ChangeBit- 00:08:19.445 [2024-10-07 09:27:14.832547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.832574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.832619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.832635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.832686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.832702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.832756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:281474976647424 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.832772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.445 #27 NEW cov: 12500 ft: 13604 corp: 6/519b lim: 120 exec/s: 0 rss: 74Mb L: 106/106 MS: 1 ChangeBinInt- 00:08:19.445 [2024-10-07 09:27:14.872655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.872683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.872751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.872767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
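The "#N NEW cov: ..." lines interleaved with the command dumps are standard libFuzzer status output: cov counts instrumented coverage points hit so far, ft counts features, corp gives the corpus entry count and total size, lim is the current input-length cap, L is the new input's length versus the largest allowed, and MS names the mutation sequence (ChangeByte, ChangeBit, InsertRepeatedBytes, ...) that produced the input. A minimal sketch of the harness shape that emits such lines follows — illustrative only, not the SPDK llvm_nvme_fuzz.c source, whose TestOneInput additionally maps the input bytes onto NVMe command fields:

    /* Minimal libFuzzer target shape (illustrative; not the SPDK harness).
     * Build: clang -g -fsanitize=fuzzer,address harness.c -o harness
     * Run:   ./harness corpus_dir   -- prints "#N NEW cov: ..." status
     * lines like the ones in this log. */
    #include <stddef.h>
    #include <stdint.h>

    /* libFuzzer calls this once per generated input. Inputs that reach
     * new coverage produce a "NEW" line and join the corpus. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        if (size < 8) {
            return 0; /* too short to build a command; skip */
        }
        /* A real target would decode `data` into a command structure
         * and submit it to the system under test here. */
        return 0;
    }
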
00:08:19.445 [2024-10-07 09:27:14.872828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.872843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.872909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18386226953716760575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.872925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.445 #28 NEW cov: 12500 ft: 13704 corp: 7/620b lim: 120 exec/s: 0 rss: 74Mb L: 101/106 MS: 1 InsertByte- 00:08:19.445 [2024-10-07 09:27:14.912765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.912795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.912858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.912876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.912930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.912947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.913003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.913019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.445 #29 NEW cov: 12500 ft: 13780 corp: 8/721b lim: 120 exec/s: 0 rss: 74Mb L: 101/106 MS: 1 CrossOver- 00:08:19.445 [2024-10-07 09:27:14.952930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.952957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.953026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.953042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.953094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.953110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.445 [2024-10-07 09:27:14.953165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.445 [2024-10-07 09:27:14.953182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.445 #30 NEW cov: 12500 ft: 13831 corp: 9/827b lim: 120 exec/s: 0 rss: 74Mb L: 106/106 MS: 1 ChangeBit- 00:08:19.446 [2024-10-07 09:27:14.993032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.446 [2024-10-07 09:27:14.993060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.446 [2024-10-07 09:27:14.993125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5836665117072162815 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.446 [2024-10-07 09:27:14.993143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.446 [2024-10-07 09:27:14.993196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.446 [2024-10-07 09:27:14.993210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.446 [2024-10-07 09:27:14.993264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.446 [2024-10-07 09:27:14.993280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.705 #31 NEW cov: 12500 ft: 13905 corp: 10/933b lim: 120 exec/s: 0 rss: 74Mb L: 106/106 MS: 1 CopyPart- 00:08:19.705 [2024-10-07 09:27:15.053183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.705 [2024-10-07 09:27:15.053209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.053281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.053297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.053351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.053368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.053422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069515247615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.053437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.706 #32 NEW cov: 12500 ft: 13980 corp: 11/1039b lim: 120 exec/s: 0 rss: 74Mb L: 106/106 MS: 1 ChangeBinInt- 00:08:19.706 [2024-10-07 09:27:15.113347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.113376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.113426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.113442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.113495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.113510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.113565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.113581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.706 #33 NEW cov: 12500 ft: 14017 corp: 12/1146b lim: 120 exec/s: 0 rss: 74Mb L: 107/107 MS: 1 InsertByte- 00:08:19.706 [2024-10-07 09:27:15.153429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.153457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.153511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18397292786631049215 len:20736 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.153527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.153580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.153596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.153651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:1099511627529 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.153670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.706 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:19.706 #34 NEW cov: 12523 ft: 14061 corp: 13/1253b lim: 120 exec/s: 0 rss: 75Mb L: 107/107 MS: 1 InsertByte- 00:08:19.706 [2024-10-07 
09:27:15.213631] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.213660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.213708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5836665114124636240 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.213723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.213795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.213817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.213873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18386226953716760575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.213889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.706 #35 NEW cov: 12523 ft: 14104 corp: 14/1354b lim: 120 exec/s: 0 rss: 75Mb L: 101/107 MS: 1 EraseBytes- 00:08:19.706 [2024-10-07 09:27:15.253724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.253753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.253806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.253825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.253895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.253910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.706 [2024-10-07 09:27:15.253966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:5787406993200271440 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.706 [2024-10-07 09:27:15.253981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.965 #36 NEW cov: 12523 ft: 14128 corp: 15/1460b lim: 120 exec/s: 0 rss: 75Mb L: 106/107 MS: 1 CopyPart- 00:08:19.965 [2024-10-07 09:27:15.293846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.965 [2024-10-07 09:27:15.293875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:19.965 [2024-10-07 09:27:15.293946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.965 [2024-10-07 09:27:15.293963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.965 [2024-10-07 09:27:15.294022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.965 [2024-10-07 09:27:15.294037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.965 [2024-10-07 09:27:15.294093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.965 [2024-10-07 09:27:15.294109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.965 #37 NEW cov: 12523 ft: 14161 corp: 16/1570b lim: 120 exec/s: 37 rss: 75Mb L: 110/110 MS: 1 InsertRepeatedBytes- 00:08:19.966 [2024-10-07 09:27:15.354018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.354046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.354093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5836665117072162807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.354109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.354162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.354179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.354232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.354248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.966 #38 NEW cov: 12523 ft: 14227 corp: 17/1676b lim: 120 exec/s: 38 rss: 75Mb L: 106/110 MS: 1 ChangeBit- 00:08:19.966 [2024-10-07 09:27:15.414167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.414194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.414248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.414264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.414317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.414333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.414385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.414401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.966 #39 NEW cov: 12523 ft: 14244 corp: 18/1786b lim: 120 exec/s: 39 rss: 75Mb L: 110/110 MS: 1 ChangeByte- 00:08:19.966 [2024-10-07 09:27:15.474349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.474379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.474418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.474434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.474486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.474518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.474575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.474592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.966 #40 NEW cov: 12523 ft: 14275 corp: 19/1904b lim: 120 exec/s: 40 rss: 75Mb L: 118/118 MS: 1 CopyPart- 00:08:19.966 [2024-10-07 09:27:15.514541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.514570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.514620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.514636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.514689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.514705] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.966 [2024-10-07 09:27:15.514760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.966 [2024-10-07 09:27:15.514776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.226 #41 NEW cov: 12523 ft: 14299 corp: 20/2010b lim: 120 exec/s: 41 rss: 75Mb L: 106/118 MS: 1 ChangeBit- 00:08:20.226 [2024-10-07 09:27:15.554705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.554733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.554805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.554828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.554884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.554902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.554957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.554973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.226 #42 NEW cov: 12523 ft: 14312 corp: 21/2116b lim: 120 exec/s: 42 rss: 75Mb L: 106/118 MS: 1 ShuffleBytes- 00:08:20.226 [2024-10-07 09:27:15.594746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.594773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.594844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.594862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.594926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.594942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.594998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 
[2024-10-07 09:27:15.595014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.226 #43 NEW cov: 12523 ft: 14346 corp: 22/2235b lim: 120 exec/s: 43 rss: 75Mb L: 119/119 MS: 1 InsertByte- 00:08:20.226 [2024-10-07 09:27:15.654858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.654886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.654957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.654974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.655028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.655043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.655099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.655115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.226 #44 NEW cov: 12523 ft: 14355 corp: 23/2342b lim: 120 exec/s: 44 rss: 75Mb L: 107/119 MS: 1 ChangeBit- 00:08:20.226 [2024-10-07 09:27:15.694990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.695017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.695086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.695103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.695157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.695176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.695229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.695246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.226 #45 NEW cov: 12523 ft: 14376 corp: 24/2449b lim: 120 exec/s: 45 rss: 75Mb L: 107/119 MS: 1 ShuffleBytes- 00:08:20.226 [2024-10-07 09:27:15.734973] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.735004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.735044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.735058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.226 [2024-10-07 09:27:15.735112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.226 [2024-10-07 09:27:15.735145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.226 #46 NEW cov: 12523 ft: 14791 corp: 25/2522b lim: 120 exec/s: 46 rss: 75Mb L: 73/119 MS: 1 EraseBytes- 00:08:20.485 [2024-10-07 09:27:15.795125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.485 [2024-10-07 09:27:15.795155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.485 [2024-10-07 09:27:15.795212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5836665117072162815 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.485 [2024-10-07 09:27:15.795228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.485 [2024-10-07 09:27:15.795281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551400 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.485 [2024-10-07 09:27:15.795297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.486 #47 NEW cov: 12523 ft: 14863 corp: 26/2597b lim: 120 exec/s: 47 rss: 75Mb L: 75/119 MS: 1 EraseBytes- 00:08:20.486 [2024-10-07 09:27:15.834947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:792633530306789375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.834975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.486 #48 NEW cov: 12523 ft: 15702 corp: 27/2627b lim: 120 exec/s: 48 rss: 75Mb L: 30/119 MS: 1 CrossOver- 00:08:20.486 [2024-10-07 09:27:15.895521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.895550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:15.895602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.895618] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:15.895673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.895694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:15.895746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.895762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.486 #49 NEW cov: 12523 ft: 15727 corp: 28/2733b lim: 120 exec/s: 49 rss: 75Mb L: 106/119 MS: 1 ChangeByte- 00:08:20.486 [2024-10-07 09:27:15.955761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.955790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:15.955859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.955877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:15.955932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.955947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:15.956000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:15.956016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.486 #50 NEW cov: 12523 ft: 15763 corp: 29/2839b lim: 120 exec/s: 50 rss: 75Mb L: 106/119 MS: 1 ShuffleBytes- 00:08:20.486 [2024-10-07 09:27:16.015927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:16.015966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:16.016040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:16.016056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:16.016111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
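Each "WRITE sqid:... nsid:0 lba:... len:..." line above is a fuzzed submission-queue entry echoed by nvme_io_qpair_print_command. In the NVM command set, a Write carries its 64-bit starting LBA in command dwords 10-11 and a zero-based block count in the low 16 bits of dword 12, so the extreme lba values here (18446744073709551615 is UINT64_MAX) are the mutator driving those dwords to their limits; nsid:0 is never a valid namespace ID, which plausibly accounts for every command completing with INVALID NAMESPACE OR FORMAT. A simplified encoder, using an illustrative struct rather than SPDK's own spdk_nvme_cmd:

    #include <stdint.h>

    /* Simplified NVMe submission-queue entry (illustrative subset; the
     * dword 10-12 layout follows the NVM command set specification). */
    struct nvme_sqe {
        uint8_t  opc;    /* opcode: 0x01 = Write, 0x08 = Write Zeroes */
        uint16_t cid;    /* command identifier, echoed in the completion */
        uint32_t nsid;   /* namespace ID; 0 is never valid for I/O */
        uint32_t cdw10;  /* SLBA[31:0]  */
        uint32_t cdw11;  /* SLBA[63:32] */
        uint32_t cdw12;  /* bits 15:0 = NLB, zero-based block count */
    };

    /* Encode a write of `nblocks` logical blocks starting at `slba`,
     * matching the "lba:... len:..." fields printed in this log. */
    static void encode_write(struct nvme_sqe *cmd, uint32_t nsid,
                             uint64_t slba, uint32_t nblocks)
    {
        cmd->opc   = 0x01;
        cmd->nsid  = nsid;
        cmd->cdw10 = (uint32_t)(slba & 0xFFFFFFFFu);
        cmd->cdw11 = (uint32_t)(slba >> 32);
        cmd->cdw12 = (nblocks - 1) & 0xFFFFu; /* 0-based per the spec */
    }
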
00:08:20.486 [2024-10-07 09:27:16.016126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.486 [2024-10-07 09:27:16.016183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.486 [2024-10-07 09:27:16.016198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.486 #51 NEW cov: 12523 ft: 15784 corp: 30/2953b lim: 120 exec/s: 51 rss: 75Mb L: 114/119 MS: 1 InsertRepeatedBytes- 00:08:20.745 [2024-10-07 09:27:16.055989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.056019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.056078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5836665114124636240 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.056095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.056151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.056166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.056221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18386226953716760575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.056235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.745 #52 NEW cov: 12523 ft: 15817 corp: 31/3054b lim: 120 exec/s: 52 rss: 75Mb L: 101/119 MS: 1 ChangeByte- 00:08:20.745 [2024-10-07 09:27:16.116233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.116261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.116331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5836665117072162807 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.116348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.116402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.116417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.116473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 
lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.116489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.745 #53 NEW cov: 12523 ft: 15835 corp: 32/3160b lim: 120 exec/s: 53 rss: 76Mb L: 106/119 MS: 1 ShuffleBytes- 00:08:20.745 [2024-10-07 09:27:16.176220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.176246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.176300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.176317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.745 [2024-10-07 09:27:16.176371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.176385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.745 #54 NEW cov: 12523 ft: 15850 corp: 33/3244b lim: 120 exec/s: 54 rss: 76Mb L: 84/119 MS: 1 EraseBytes- 00:08:20.745 [2024-10-07 09:27:16.216404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.745 [2024-10-07 09:27:16.216432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.746 [2024-10-07 09:27:16.216482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5787213829993660415 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.746 [2024-10-07 09:27:16.216498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.746 [2024-10-07 09:27:16.216548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.746 [2024-10-07 09:27:16.216564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.746 [2024-10-07 09:27:16.216618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:10496 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.746 [2024-10-07 09:27:16.216633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:20.746 #55 NEW cov: 12523 ft: 15866 corp: 34/3350b lim: 120 exec/s: 55 rss: 76Mb L: 106/119 MS: 1 ShuffleBytes- 00:08:20.746 [2024-10-07 09:27:16.256389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.746 [2024-10-07 09:27:16.256418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.746 [2024-10-07 09:27:16.256454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.746 [2024-10-07 09:27:16.256470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.746 [2024-10-07 09:27:16.256527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5787213829993660240 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.746 [2024-10-07 09:27:16.256542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:20.746 #56 NEW cov: 12523 ft: 15902 corp: 35/3435b lim: 120 exec/s: 28 rss: 76Mb L: 85/119 MS: 1 EraseBytes- 00:08:20.746 #56 DONE cov: 12523 ft: 15902 corp: 35/3435b lim: 120 exec/s: 28 rss: 76Mb 00:08:20.746 Done 56 runs in 2 second(s) 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:21.005 09:27:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:21.005 [2024-10-07 09:27:16.480303] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:21.005 [2024-10-07 09:27:16.480380] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493700 ] 00:08:21.265 [2024-10-07 09:27:16.786013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.525 [2024-10-07 09:27:16.881273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.525 [2024-10-07 09:27:16.940218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.525 [2024-10-07 09:27:16.956435] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:21.525 INFO: Running with entropic power schedule (0xFF, 100). 00:08:21.525 INFO: Seed: 1745077306 00:08:21.525 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:21.525 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:21.525 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:21.525 INFO: A corpus is not provided, starting from an empty corpus 00:08:21.525 #2 INITED exec/s: 0 rss: 67Mb 00:08:21.525 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:21.525 This may also happen if the target rejected all inputs we tried so far 00:08:21.525 [2024-10-07 09:27:17.011957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.525 [2024-10-07 09:27:17.011988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.525 [2024-10-07 09:27:17.012038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.525 [2024-10-07 09:27:17.012053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.525 [2024-10-07 09:27:17.012103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:21.525 [2024-10-07 09:27:17.012118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.783 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:21.783 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:21.783 #11 NEW cov: 12239 ft: 12232 corp: 2/64b lim: 100 exec/s: 0 rss: 74Mb L: 63/63 MS: 4 CrossOver-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:08:21.783 [2024-10-07 09:27:17.342735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:21.783 [2024-10-07 09:27:17.342774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.783 [2024-10-07 09:27:17.342827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:21.783 
[2024-10-07 09:27:17.342842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.783 [2024-10-07 09:27:17.342891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:21.783 [2024-10-07 09:27:17.342908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.043 #12 NEW cov: 12352 ft: 12783 corp: 3/127b lim: 100 exec/s: 0 rss: 74Mb L: 63/63 MS: 1 CopyPart- 00:08:22.043 [2024-10-07 09:27:17.402765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.043 [2024-10-07 09:27:17.402793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.043 [2024-10-07 09:27:17.402860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.043 [2024-10-07 09:27:17.402877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.043 #13 NEW cov: 12358 ft: 13242 corp: 4/171b lim: 100 exec/s: 0 rss: 74Mb L: 44/63 MS: 1 EraseBytes- 00:08:22.043 [2024-10-07 09:27:17.462922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.043 [2024-10-07 09:27:17.462952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.043 [2024-10-07 09:27:17.462988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.043 [2024-10-07 09:27:17.463002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.043 #15 NEW cov: 12443 ft: 13604 corp: 5/213b lim: 100 exec/s: 0 rss: 74Mb L: 42/63 MS: 2 ChangeByte-CrossOver- 00:08:22.043 [2024-10-07 09:27:17.503023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.043 [2024-10-07 09:27:17.503048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.043 [2024-10-07 09:27:17.503094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.043 [2024-10-07 09:27:17.503109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.043 #16 NEW cov: 12443 ft: 13839 corp: 6/255b lim: 100 exec/s: 0 rss: 74Mb L: 42/63 MS: 1 ChangeBit- 00:08:22.043 [2024-10-07 09:27:17.563143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.043 [2024-10-07 09:27:17.563170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.043 [2024-10-07 09:27:17.563222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.043 [2024-10-07 09:27:17.563235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.043 #17 NEW cov: 12443 ft: 13929 corp: 7/297b lim: 100 exec/s: 0 rss: 74Mb L: 42/63 MS: 1 ChangeBinInt- 
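Every command print above is paired with a completion print from spdk_nvme_print_completion, and the "(00/0b)" notation is the status code type / status code pair: type 0x0 (generic command status) with code 0x0B, Invalid Namespace or Format. The trailing "p:0 m:0 dnr:1" are the phase, more, and do-not-retry bits of the completion's 16-bit status field. A small decoder over that layout (field positions per the NVMe base specification; the function itself is a hypothetical helper, not SPDK code):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit completion status (CQE dword 3, bits 31:16)
     * into the fields this log prints: "(SCT/SC) ... p:_ m:_ dnr:_".
     * Bit layout: P[0] SC[8:1] SCT[11:9] CRD[13:12] M[14] DNR[15]. */
    static void print_status(uint16_t status)
    {
        unsigned p   = status & 0x1;          /* phase tag */
        unsigned sc  = (status >> 1) & 0xFF;  /* status code */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more info available */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

        /* Generic (sct 0x0) status 0x0B is Invalid Namespace or
         * Format, i.e. the "(00/0b)" seen throughout this log. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }
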
00:08:22.302 [2024-10-07 09:27:17.623375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.302 [2024-10-07 09:27:17.623403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.623441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.302 [2024-10-07 09:27:17.623455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.302 #18 NEW cov: 12443 ft: 14008 corp: 8/342b lim: 100 exec/s: 0 rss: 74Mb L: 45/63 MS: 1 InsertByte- 00:08:22.302 [2024-10-07 09:27:17.683491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.302 [2024-10-07 09:27:17.683517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.683567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.302 [2024-10-07 09:27:17.683586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.302 #19 NEW cov: 12443 ft: 14089 corp: 9/384b lim: 100 exec/s: 0 rss: 74Mb L: 42/63 MS: 1 ChangeBit- 00:08:22.302 [2024-10-07 09:27:17.723849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.302 [2024-10-07 09:27:17.723874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.723947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.302 [2024-10-07 09:27:17.723962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.724013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.302 [2024-10-07 09:27:17.724026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.724077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:22.302 [2024-10-07 09:27:17.724091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.302 #23 NEW cov: 12443 ft: 14423 corp: 10/477b lim: 100 exec/s: 0 rss: 74Mb L: 93/93 MS: 4 ChangeByte-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:22.302 [2024-10-07 09:27:17.763751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.302 [2024-10-07 09:27:17.763776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.763819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.302 [2024-10-07 09:27:17.763850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.302 #24 NEW cov: 12443 ft: 14451 
corp: 11/522b lim: 100 exec/s: 0 rss: 75Mb L: 45/93 MS: 1 ChangeBinInt- 00:08:22.302 [2024-10-07 09:27:17.823895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.302 [2024-10-07 09:27:17.823919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.302 [2024-10-07 09:27:17.823976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.302 [2024-10-07 09:27:17.823991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 #25 NEW cov: 12443 ft: 14468 corp: 12/564b lim: 100 exec/s: 0 rss: 75Mb L: 42/93 MS: 1 ChangeBit- 00:08:22.562 [2024-10-07 09:27:17.884073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.562 [2024-10-07 09:27:17.884099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:17.884133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.562 [2024-10-07 09:27:17.884147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:22.562 #26 NEW cov: 12466 ft: 14549 corp: 13/606b lim: 100 exec/s: 0 rss: 75Mb L: 42/93 MS: 1 ChangeBinInt- 00:08:22.562 [2024-10-07 09:27:17.924153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.562 [2024-10-07 09:27:17.924178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:17.924214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.562 [2024-10-07 09:27:17.924231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 #27 NEW cov: 12466 ft: 14560 corp: 14/648b lim: 100 exec/s: 0 rss: 75Mb L: 42/93 MS: 1 ShuffleBytes- 00:08:22.562 [2024-10-07 09:27:17.964302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.562 [2024-10-07 09:27:17.964327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:17.964383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.562 [2024-10-07 09:27:17.964398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 #28 NEW cov: 12466 ft: 14613 corp: 15/695b lim: 100 exec/s: 0 rss: 75Mb L: 47/93 MS: 1 CopyPart- 00:08:22.562 [2024-10-07 09:27:18.004368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.562 [2024-10-07 09:27:18.004394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:18.004430] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.562 [2024-10-07 09:27:18.004443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 #29 NEW cov: 12466 ft: 14680 corp: 16/740b lim: 100 exec/s: 29 rss: 75Mb L: 45/93 MS: 1 ChangeBit- 00:08:22.562 [2024-10-07 09:27:18.064787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.562 [2024-10-07 09:27:18.064818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:18.064889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.562 [2024-10-07 09:27:18.064903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:18.064954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.562 [2024-10-07 09:27:18.064966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:18.065018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:22.562 [2024-10-07 09:27:18.065031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.562 #30 NEW cov: 12466 ft: 14707 corp: 17/830b lim: 100 exec/s: 30 rss: 75Mb L: 90/93 MS: 1 InsertRepeatedBytes- 00:08:22.562 [2024-10-07 09:27:18.124890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.562 [2024-10-07 09:27:18.124916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:18.124954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.562 [2024-10-07 09:27:18.124969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.562 [2024-10-07 09:27:18.125023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.562 [2024-10-07 09:27:18.125035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.821 #36 NEW cov: 12466 ft: 14719 corp: 18/893b lim: 100 exec/s: 36 rss: 75Mb L: 63/93 MS: 1 CopyPart- 00:08:22.821 [2024-10-07 09:27:18.164836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.821 [2024-10-07 09:27:18.164861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.821 [2024-10-07 09:27:18.164916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.821 [2024-10-07 09:27:18.164931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.821 #37 NEW cov: 12466 ft: 14781 corp: 19/946b lim: 100 exec/s: 37 rss: 75Mb L: 53/93 MS: 1 CrossOver- 00:08:22.821 
[2024-10-07 09:27:18.225148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.821 [2024-10-07 09:27:18.225173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.821 [2024-10-07 09:27:18.225209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.821 [2024-10-07 09:27:18.225224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.821 [2024-10-07 09:27:18.225276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.821 [2024-10-07 09:27:18.225288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.821 #38 NEW cov: 12466 ft: 14821 corp: 20/1009b lim: 100 exec/s: 38 rss: 75Mb L: 63/93 MS: 1 ShuffleBytes- 00:08:22.821 [2024-10-07 09:27:18.265140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.821 [2024-10-07 09:27:18.265165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.821 [2024-10-07 09:27:18.265218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.821 [2024-10-07 09:27:18.265231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.821 #39 NEW cov: 12466 ft: 14852 corp: 21/1052b lim: 100 exec/s: 39 rss: 75Mb L: 43/93 MS: 1 InsertByte- 00:08:22.821 [2024-10-07 09:27:18.325391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:22.821 [2024-10-07 09:27:18.325416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.821 [2024-10-07 09:27:18.325466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:22.821 [2024-10-07 09:27:18.325480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.821 [2024-10-07 09:27:18.325533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:22.821 [2024-10-07 09:27:18.325547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.821 #40 NEW cov: 12466 ft: 14883 corp: 22/1130b lim: 100 exec/s: 40 rss: 75Mb L: 78/93 MS: 1 EraseBytes- 00:08:23.080 [2024-10-07 09:27:18.385566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.080 [2024-10-07 09:27:18.385593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.385654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.080 [2024-10-07 09:27:18.385669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.385722] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:23.080 [2024-10-07 09:27:18.385735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.080 #41 NEW cov: 12466 ft: 14889 corp: 23/1193b lim: 100 exec/s: 41 rss: 75Mb L: 63/93 MS: 1 ShuffleBytes- 00:08:23.080 [2024-10-07 09:27:18.445647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.080 [2024-10-07 09:27:18.445679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.445732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.080 [2024-10-07 09:27:18.445747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.080 #42 NEW cov: 12466 ft: 14908 corp: 24/1237b lim: 100 exec/s: 42 rss: 75Mb L: 44/93 MS: 1 ShuffleBytes- 00:08:23.080 [2024-10-07 09:27:18.485762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.080 [2024-10-07 09:27:18.485788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.485833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.080 [2024-10-07 09:27:18.485848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.080 #43 NEW cov: 12466 ft: 14963 corp: 25/1282b lim: 100 exec/s: 43 rss: 75Mb L: 45/93 MS: 1 ShuffleBytes- 00:08:23.080 [2024-10-07 09:27:18.525861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.080 [2024-10-07 09:27:18.525888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.525922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.080 [2024-10-07 09:27:18.525937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.080 #44 NEW cov: 12466 ft: 14977 corp: 26/1327b lim: 100 exec/s: 44 rss: 75Mb L: 45/93 MS: 1 ChangeBit- 00:08:23.080 [2024-10-07 09:27:18.566002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.080 [2024-10-07 09:27:18.566029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.566082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.080 [2024-10-07 09:27:18.566097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.080 #45 NEW cov: 12466 ft: 14999 corp: 27/1377b lim: 100 exec/s: 45 rss: 75Mb L: 50/93 MS: 1 InsertRepeatedBytes- 00:08:23.080 [2024-10-07 09:27:18.606075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.080 [2024-10-07 09:27:18.606102] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.080 [2024-10-07 09:27:18.606138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.080 [2024-10-07 09:27:18.606151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.338 #46 NEW cov: 12466 ft: 15003 corp: 28/1431b lim: 100 exec/s: 46 rss: 75Mb L: 54/93 MS: 1 InsertByte- 00:08:23.338 [2024-10-07 09:27:18.666228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.338 [2024-10-07 09:27:18.666253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.338 [2024-10-07 09:27:18.666305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.338 [2024-10-07 09:27:18.666320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.338 #47 NEW cov: 12466 ft: 15004 corp: 29/1475b lim: 100 exec/s: 47 rss: 75Mb L: 44/93 MS: 1 ChangeByte- 00:08:23.338 [2024-10-07 09:27:18.706225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.338 [2024-10-07 09:27:18.706252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.338 #48 NEW cov: 12466 ft: 15338 corp: 30/1502b lim: 100 exec/s: 48 rss: 75Mb L: 27/93 MS: 1 EraseBytes- 00:08:23.338 [2024-10-07 09:27:18.766508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.338 [2024-10-07 09:27:18.766533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.338 [2024-10-07 09:27:18.766570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.338 [2024-10-07 09:27:18.766584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.338 #49 NEW cov: 12466 ft: 15357 corp: 31/1549b lim: 100 exec/s: 49 rss: 75Mb L: 47/93 MS: 1 ChangeBit- 00:08:23.338 [2024-10-07 09:27:18.806562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.338 [2024-10-07 09:27:18.806588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.338 #50 NEW cov: 12466 ft: 15415 corp: 32/1588b lim: 100 exec/s: 50 rss: 75Mb L: 39/93 MS: 1 EraseBytes- 00:08:23.338 [2024-10-07 09:27:18.866809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.338 [2024-10-07 09:27:18.866839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.339 [2024-10-07 09:27:18.866887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.339 [2024-10-07 09:27:18.866900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.598 #51 NEW cov: 12466 ft: 
15425 corp: 33/1633b lim: 100 exec/s: 51 rss: 76Mb L: 45/93 MS: 1 InsertByte- 00:08:23.598 [2024-10-07 09:27:18.927017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.598 [2024-10-07 09:27:18.927042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.598 [2024-10-07 09:27:18.927096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.598 [2024-10-07 09:27:18.927112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.598 #52 NEW cov: 12466 ft: 15471 corp: 34/1677b lim: 100 exec/s: 52 rss: 76Mb L: 44/93 MS: 1 ChangeByte- 00:08:23.598 [2024-10-07 09:27:18.987252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:23.598 [2024-10-07 09:27:18.987277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.598 [2024-10-07 09:27:18.987331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:23.598 [2024-10-07 09:27:18.987346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.598 [2024-10-07 09:27:18.987398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:23.598 [2024-10-07 09:27:18.987413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.598 #53 NEW cov: 12466 ft: 15477 corp: 35/1752b lim: 100 exec/s: 26 rss: 76Mb L: 75/93 MS: 1 CopyPart- 00:08:23.598 #53 DONE cov: 12466 ft: 15477 corp: 35/1752b lim: 100 exec/s: 26 rss: 76Mb 00:08:23.598 Done 53 runs in 2 second(s) 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:23.598 09:27:19 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:23.598 09:27:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:23.858 [2024-10-07 09:27:19.177164] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:23.858 [2024-10-07 09:27:19.177238] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494061 ] 00:08:24.117 [2024-10-07 09:27:19.487160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.117 [2024-10-07 09:27:19.579841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.117 [2024-10-07 09:27:19.638942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.117 [2024-10-07 09:27:19.655133] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:24.117 INFO: Running with entropic power schedule (0xFF, 100). 00:08:24.117 INFO: Seed: 149096605 00:08:24.376 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:24.376 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:24.376 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:24.376 INFO: A corpus is not provided, starting from an empty corpus 00:08:24.376 #2 INITED exec/s: 0 rss: 67Mb 00:08:24.376 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:24.376 This may also happen if the target rejected all inputs we tried so far 00:08:24.376 [2024-10-07 09:27:19.710143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069817958399 len:65536 00:08:24.376 [2024-10-07 09:27:19.710177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.376 [2024-10-07 09:27:19.710211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:24.376 [2024-10-07 09:27:19.710230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.376 [2024-10-07 09:27:19.710265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:24.376 [2024-10-07 09:27:19.710282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.376 [2024-10-07 09:27:19.710310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:24.376 [2024-10-07 09:27:19.710326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.636 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:24.636 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:24.636 #9 NEW cov: 12212 ft: 12213 corp: 2/45b lim: 50 exec/s: 0 rss: 74Mb L: 44/44 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:24.636 [2024-10-07 09:27:20.111190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:24.636 [2024-10-07 09:27:20.111269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.636 [2024-10-07 09:27:20.111307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:24.636 [2024-10-07 09:27:20.111325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.636 [2024-10-07 09:27:20.111354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:24.636 [2024-10-07 09:27:20.111371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.636 [2024-10-07 09:27:20.111400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:24.636 [2024-10-07 09:27:20.111416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.636 #10 NEW cov: 12330 ft: 12655 corp: 3/89b lim: 50 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 ChangeBit- 00:08:24.896 [2024-10-07 09:27:20.211282] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:24.896 [2024-10-07 09:27:20.211314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.211362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.211382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.211414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.211431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.211460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709487615 len:65536 00:08:24.896 [2024-10-07 09:27:20.211476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.896 #11 NEW cov: 12336 ft: 12907 corp: 4/133b lim: 50 exec/s: 0 rss: 74Mb L: 44/44 MS: 1 ChangeBinInt- 00:08:24.896 [2024-10-07 09:27:20.311519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069817958399 len:65536 00:08:24.896 [2024-10-07 09:27:20.311555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.311589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.311607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.311638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65325 00:08:24.896 [2024-10-07 09:27:20.311655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.311683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.311700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.896 #12 NEW cov: 12421 ft: 13281 corp: 5/178b lim: 50 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 InsertByte- 00:08:24.896 [2024-10-07 09:27:20.371680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.371711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.371742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:11520 00:08:24.896 [2024-10-07 09:27:20.371760] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.371790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.371807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.371859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.371875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.896 #13 NEW cov: 12421 ft: 13404 corp: 6/222b lim: 50 exec/s: 0 rss: 74Mb L: 44/45 MS: 1 CrossOver- 00:08:24.896 [2024-10-07 09:27:20.431863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.431894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.431927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906370199930011647 len:65536 00:08:24.896 [2024-10-07 09:27:20.431945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.431986] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073695723519 len:65536 00:08:24.896 [2024-10-07 09:27:20.432003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.896 [2024-10-07 09:27:20.432031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:24.896 [2024-10-07 09:27:20.432047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.161 #14 NEW cov: 12421 ft: 13470 corp: 7/269b lim: 50 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:08:25.161 [2024-10-07 09:27:20.532116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:25.161 [2024-10-07 09:27:20.532152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.161 [2024-10-07 09:27:20.532186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906370199930011647 len:65536 00:08:25.161 [2024-10-07 09:27:20.532205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.161 [2024-10-07 09:27:20.532235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073695723519 len:65536 00:08:25.161 [2024-10-07 09:27:20.532252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.161 [2024-10-07 09:27:20.532280] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.161 [2024-10-07 09:27:20.532296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.161 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:25.161 #15 NEW cov: 12438 ft: 13526 corp: 8/316b lim: 50 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 ShuffleBytes- 00:08:25.161 [2024-10-07 09:27:20.632402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:25.161 [2024-10-07 09:27:20.632434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.161 [2024-10-07 09:27:20.632466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709486079 len:65536 00:08:25.161 [2024-10-07 09:27:20.632486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.161 [2024-10-07 09:27:20.632516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:25.161 [2024-10-07 09:27:20.632533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.161 [2024-10-07 09:27:20.632561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446469195802607615 len:65536 00:08:25.161 [2024-10-07 09:27:20.632578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.161 #16 NEW cov: 12438 ft: 13582 corp: 9/364b lim: 50 exec/s: 16 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:08:25.420 [2024-10-07 09:27:20.732640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:25.420 [2024-10-07 09:27:20.732672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.732704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.732722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.732753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709532671 len:65536 00:08:25.420 [2024-10-07 09:27:20.732769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.732797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551365 len:65536 00:08:25.420 [2024-10-07 09:27:20.732822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.420 #17 NEW cov: 12438 ft: 13592 corp: 10/409b lim: 50 exec/s: 17 rss: 74Mb L: 45/48 MS: 1 
InsertByte- 00:08:25.420 [2024-10-07 09:27:20.792781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:25.420 [2024-10-07 09:27:20.792819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.792867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.792886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.792917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446499982128185343 len:65536 00:08:25.420 [2024-10-07 09:27:20.792934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.792963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.792979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.420 #18 NEW cov: 12438 ft: 13640 corp: 11/453b lim: 50 exec/s: 18 rss: 74Mb L: 44/48 MS: 1 ChangeByte- 00:08:25.420 [2024-10-07 09:27:20.853000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374686475796086783 len:65536 00:08:25.420 [2024-10-07 09:27:20.853031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.853064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.853082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.853113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551393 len:65536 00:08:25.420 [2024-10-07 09:27:20.853129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.853158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.853174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.420 #19 NEW cov: 12438 ft: 13678 corp: 12/494b lim: 50 exec/s: 19 rss: 74Mb L: 41/48 MS: 1 EraseBytes- 00:08:25.420 [2024-10-07 09:27:20.953269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:25.420 [2024-10-07 09:27:20.953300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.953333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.953352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.953382] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:25.420 [2024-10-07 09:27:20.953399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.420 [2024-10-07 09:27:20.953428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446743979220271103 len:65536 00:08:25.420 [2024-10-07 09:27:20.953450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.679 #20 NEW cov: 12438 ft: 13695 corp: 13/538b lim: 50 exec/s: 20 rss: 75Mb L: 44/48 MS: 1 ChangeByte- 00:08:25.679 [2024-10-07 09:27:21.013378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 09:27:21.013409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.013443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906370199930011647 len:65536 00:08:25.680 [2024-10-07 09:27:21.013461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.013491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073695723519 len:59136 00:08:25.680 [2024-10-07 09:27:21.013508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.013537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 09:27:21.013554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.680 #21 NEW cov: 12438 ft: 13735 corp: 14/585b lim: 50 exec/s: 21 rss: 75Mb L: 47/48 MS: 1 ChangeByte- 00:08:25.680 [2024-10-07 09:27:21.073472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 09:27:21.073506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.073541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906370199930011647 len:65536 00:08:25.680 [2024-10-07 09:27:21.073559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.680 #22 NEW cov: 12438 ft: 14113 corp: 15/613b lim: 50 exec/s: 22 rss: 75Mb L: 28/48 MS: 1 EraseBytes- 00:08:25.680 [2024-10-07 09:27:21.173860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 
09:27:21.173894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.173925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906370199930011647 len:65536 00:08:25.680 [2024-10-07 09:27:21.173944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.173975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073695723519 len:65536 00:08:25.680 [2024-10-07 09:27:21.173991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.174020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 09:27:21.174037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.680 #23 NEW cov: 12438 ft: 14122 corp: 16/660b lim: 50 exec/s: 23 rss: 75Mb L: 47/48 MS: 1 ShuffleBytes- 00:08:25.680 [2024-10-07 09:27:21.233975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069817958399 len:65536 00:08:25.680 [2024-10-07 09:27:21.234006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.234043] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 09:27:21.234061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.234090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65325 00:08:25.680 [2024-10-07 09:27:21.234106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.680 [2024-10-07 09:27:21.234150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.680 [2024-10-07 09:27:21.234167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.940 #24 NEW cov: 12438 ft: 14131 corp: 17/705b lim: 50 exec/s: 24 rss: 75Mb L: 45/48 MS: 1 ShuffleBytes- 00:08:25.940 [2024-10-07 09:27:21.334269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:25.940 [2024-10-07 09:27:21.334302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.334335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65417 00:08:25.940 [2024-10-07 09:27:21.334354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.940 
[2024-10-07 09:27:21.334385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446662707844778239 len:65536 00:08:25.940 [2024-10-07 09:27:21.334403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.334431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446742999967727615 len:65536 00:08:25.940 [2024-10-07 09:27:21.334448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.940 #25 NEW cov: 12438 ft: 14142 corp: 18/754b lim: 50 exec/s: 25 rss: 75Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:08:25.940 [2024-10-07 09:27:21.434526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:25.940 [2024-10-07 09:27:21.434559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.434592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3962934811844149247 len:65335 00:08:25.940 [2024-10-07 09:27:21.434610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.434639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:25.940 [2024-10-07 09:27:21.434656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.434684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:25.940 [2024-10-07 09:27:21.434700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.940 #26 NEW cov: 12438 ft: 14195 corp: 19/801b lim: 50 exec/s: 26 rss: 75Mb L: 47/49 MS: 1 ShuffleBytes- 00:08:25.940 [2024-10-07 09:27:21.494654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069817958399 len:65536 00:08:25.940 [2024-10-07 09:27:21.494691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.494724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:25.940 [2024-10-07 09:27:21.494742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.494771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65325 00:08:25.940 [2024-10-07 09:27:21.494787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.940 [2024-10-07 09:27:21.494838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 
00:08:25.940 [2024-10-07 09:27:21.494855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:26.201 #27 NEW cov: 12438 ft: 14268 corp: 20/846b lim: 50 exec/s: 27 rss: 75Mb L: 45/49 MS: 1 ShuffleBytes- 00:08:26.201 [2024-10-07 09:27:21.554855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744065522991103 len:65536 00:08:26.201 [2024-10-07 09:27:21.554888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.201 [2024-10-07 09:27:21.554922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:26.201 [2024-10-07 09:27:21.554941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.201 [2024-10-07 09:27:21.554972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:26.201 [2024-10-07 09:27:21.554989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.201 [2024-10-07 09:27:21.555017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446743979220271103 len:65536 00:08:26.201 [2024-10-07 09:27:21.555034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:26.201 #28 NEW cov: 12445 ft: 14299 corp: 21/890b lim: 50 exec/s: 28 rss: 75Mb L: 44/49 MS: 1 ShuffleBytes- 00:08:26.201 [2024-10-07 09:27:21.645098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071562067967 len:65536 00:08:26.201 [2024-10-07 09:27:21.645133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.201 [2024-10-07 09:27:21.645182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906370199930011647 len:65536 00:08:26.201 [2024-10-07 09:27:21.645201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.201 [2024-10-07 09:27:21.645231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073695723519 len:65536 00:08:26.201 [2024-10-07 09:27:21.645248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.201 [2024-10-07 09:27:21.645277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:26.201 [2024-10-07 09:27:21.645305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:26.201 #29 NEW cov: 12445 ft: 14327 corp: 22/937b lim: 50 exec/s: 14 rss: 75Mb L: 47/49 MS: 1 ChangeBit- 00:08:26.201 #29 DONE cov: 12445 ft: 14327 corp: 22/937b lim: 50 exec/s: 14 rss: 75Mb 00:08:26.201 Done 29 runs in 2 second(s) 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf 
/var/tmp/suppress_nvmf_fuzz 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:26.462 09:27:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:26.462 [2024-10-07 09:27:21.919253] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:26.462 [2024-10-07 09:27:21.919334] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494417 ] 00:08:26.722 [2024-10-07 09:27:22.235979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.981 [2024-10-07 09:27:22.332195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.981 [2024-10-07 09:27:22.391175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.981 [2024-10-07 09:27:22.407387] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:26.981 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:26.981 INFO: Seed: 2901097779 00:08:26.981 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:26.981 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:26.981 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:26.981 INFO: A corpus is not provided, starting from an empty corpus 00:08:26.981 #2 INITED exec/s: 0 rss: 68Mb 00:08:26.981 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:26.981 This may also happen if the target rejected all inputs we tried so far 00:08:26.981 [2024-10-07 09:27:22.463151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:26.981 [2024-10-07 09:27:22.463183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.981 [2024-10-07 09:27:22.463241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:26.981 [2024-10-07 09:27:22.463258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.981 [2024-10-07 09:27:22.463311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:26.981 [2024-10-07 09:27:22.463327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:26.981 [2024-10-07 09:27:22.463382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:26.981 [2024-10-07 09:27:22.463399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.241 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:27.241 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:27.241 #13 NEW cov: 12275 ft: 12270 corp: 2/75b lim: 90 exec/s: 0 rss: 74Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:08:27.241 [2024-10-07 09:27:22.804389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.241 [2024-10-07 09:27:22.804455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.241 [2024-10-07 09:27:22.804537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.241 [2024-10-07 09:27:22.804568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.500 [2024-10-07 09:27:22.804646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.500 [2024-10-07 09:27:22.804675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.500 [2024-10-07 09:27:22.804756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.500 [2024-10-07 09:27:22.804784] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.500 #14 NEW cov: 12388 ft: 12854 corp: 3/150b lim: 90 exec/s: 0 rss: 74Mb L: 75/75 MS: 1 InsertByte- 00:08:27.500 [2024-10-07 09:27:22.873850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.500 [2024-10-07 09:27:22.873879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:22.873933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.501 [2024-10-07 09:27:22.873950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.501 #15 NEW cov: 12394 ft: 13650 corp: 4/191b lim: 90 exec/s: 0 rss: 74Mb L: 41/75 MS: 1 EraseBytes- 00:08:27.501 [2024-10-07 09:27:22.914089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.501 [2024-10-07 09:27:22.914117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:22.914171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.501 [2024-10-07 09:27:22.914188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:22.914244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.501 [2024-10-07 09:27:22.914262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.501 #16 NEW cov: 12479 ft: 14113 corp: 5/246b lim: 90 exec/s: 0 rss: 74Mb L: 55/75 MS: 1 InsertRepeatedBytes- 00:08:27.501 [2024-10-07 09:27:22.954381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.501 [2024-10-07 09:27:22.954409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:22.954480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.501 [2024-10-07 09:27:22.954496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:22.954553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.501 [2024-10-07 09:27:22.954568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:22.954623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.501 [2024-10-07 09:27:22.954638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.501 #22 NEW cov: 12479 ft: 14212 corp: 6/324b lim: 90 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 CopyPart- 00:08:27.501 [2024-10-07 09:27:23.014527] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.501 [2024-10-07 09:27:23.014555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:23.014626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.501 [2024-10-07 09:27:23.014641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:23.014695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.501 [2024-10-07 09:27:23.014710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.501 [2024-10-07 09:27:23.014763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.501 [2024-10-07 09:27:23.014779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.501 #23 NEW cov: 12479 ft: 14313 corp: 7/409b lim: 90 exec/s: 0 rss: 74Mb L: 85/85 MS: 1 CrossOver- 00:08:27.760 [2024-10-07 09:27:23.074445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.760 [2024-10-07 09:27:23.074475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.760 [2024-10-07 09:27:23.074520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.760 [2024-10-07 09:27:23.074536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.760 #24 NEW cov: 12479 ft: 14407 corp: 8/453b lim: 90 exec/s: 0 rss: 74Mb L: 44/85 MS: 1 EraseBytes- 00:08:27.760 [2024-10-07 09:27:23.114805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.760 [2024-10-07 09:27:23.114836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.114911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.761 [2024-10-07 09:27:23.114927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.114983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.761 [2024-10-07 09:27:23.114999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.115055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.761 [2024-10-07 09:27:23.115071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.761 #25 NEW cov: 12479 ft: 14437 corp: 9/531b lim: 90 exec/s: 0 rss: 74Mb L: 78/85 MS: 1 EraseBytes- 00:08:27.761 [2024-10-07 09:27:23.175002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.761 [2024-10-07 09:27:23.175031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.175079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.761 [2024-10-07 09:27:23.175094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.175150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.761 [2024-10-07 09:27:23.175164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.175221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.761 [2024-10-07 09:27:23.175236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.761 #26 NEW cov: 12479 ft: 14484 corp: 10/606b lim: 90 exec/s: 0 rss: 74Mb L: 75/85 MS: 1 ShuffleBytes- 00:08:27.761 [2024-10-07 09:27:23.214917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.761 [2024-10-07 09:27:23.214945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.215006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.761 [2024-10-07 09:27:23.215021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.215078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.761 [2024-10-07 09:27:23.215094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.761 #32 NEW cov: 12479 ft: 14586 corp: 11/661b lim: 90 exec/s: 0 rss: 74Mb L: 55/85 MS: 1 CopyPart- 00:08:27.761 [2024-10-07 09:27:23.275266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:27.761 [2024-10-07 09:27:23.275294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.275341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:27.761 [2024-10-07 09:27:23.275357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.275410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:27.761 [2024-10-07 09:27:23.275425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.761 [2024-10-07 09:27:23.275479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:27.761 [2024-10-07 09:27:23.275493] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:27.761 #33 NEW cov: 12479 ft: 14626 corp: 12/736b lim: 90 exec/s: 0 rss: 75Mb L: 75/85 MS: 1 ShuffleBytes- 00:08:28.021 [2024-10-07 09:27:23.335295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.021 [2024-10-07 09:27:23.335323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.335383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.021 [2024-10-07 09:27:23.335399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.335454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.021 [2024-10-07 09:27:23.335469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.021 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:28.021 #34 NEW cov: 12502 ft: 14669 corp: 13/791b lim: 90 exec/s: 0 rss: 75Mb L: 55/85 MS: 1 CopyPart- 00:08:28.021 [2024-10-07 09:27:23.395614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.021 [2024-10-07 09:27:23.395642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.395689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.021 [2024-10-07 09:27:23.395704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.395760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.021 [2024-10-07 09:27:23.395773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.395847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.021 [2024-10-07 09:27:23.395863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.021 #35 NEW cov: 12502 ft: 14715 corp: 14/865b lim: 90 exec/s: 0 rss: 75Mb L: 74/85 MS: 1 CopyPart- 00:08:28.021 [2024-10-07 09:27:23.435728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.021 [2024-10-07 09:27:23.435756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.435829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.021 [2024-10-07 09:27:23.435845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.435911] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.021 [2024-10-07 09:27:23.435927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.435981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.021 [2024-10-07 09:27:23.435998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.021 #36 NEW cov: 12502 ft: 14748 corp: 15/940b lim: 90 exec/s: 36 rss: 75Mb L: 75/85 MS: 1 ChangeByte- 00:08:28.021 [2024-10-07 09:27:23.495703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.021 [2024-10-07 09:27:23.495731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.495795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.021 [2024-10-07 09:27:23.495810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.495872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.021 [2024-10-07 09:27:23.495886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.021 #37 NEW cov: 12502 ft: 14769 corp: 16/996b lim: 90 exec/s: 37 rss: 75Mb L: 56/85 MS: 1 InsertByte- 00:08:28.021 [2024-10-07 09:27:23.535679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.021 [2024-10-07 09:27:23.535707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.021 [2024-10-07 09:27:23.535758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.021 [2024-10-07 09:27:23.535773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.021 #38 NEW cov: 12502 ft: 14782 corp: 17/1040b lim: 90 exec/s: 38 rss: 75Mb L: 44/85 MS: 1 ShuffleBytes- 00:08:28.281 [2024-10-07 09:27:23.596139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.281 [2024-10-07 09:27:23.596168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.596217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.281 [2024-10-07 09:27:23.596234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.596303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.281 [2024-10-07 09:27:23.596318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.596374] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.281 [2024-10-07 09:27:23.596390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.281 #39 NEW cov: 12502 ft: 14842 corp: 18/1114b lim: 90 exec/s: 39 rss: 75Mb L: 74/85 MS: 1 CopyPart- 00:08:28.281 [2024-10-07 09:27:23.655883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.281 [2024-10-07 09:27:23.655911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.281 #40 NEW cov: 12502 ft: 15690 corp: 19/1147b lim: 90 exec/s: 40 rss: 75Mb L: 33/85 MS: 1 CrossOver- 00:08:28.281 [2024-10-07 09:27:23.716500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.281 [2024-10-07 09:27:23.716527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.716591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.281 [2024-10-07 09:27:23.716606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.716661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.281 [2024-10-07 09:27:23.716677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.716731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.281 [2024-10-07 09:27:23.716751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.281 #41 NEW cov: 12502 ft: 15713 corp: 20/1221b lim: 90 exec/s: 41 rss: 75Mb L: 74/85 MS: 1 ShuffleBytes- 00:08:28.281 [2024-10-07 09:27:23.776680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.281 [2024-10-07 09:27:23.776708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.281 [2024-10-07 09:27:23.776776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.282 [2024-10-07 09:27:23.776793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.282 [2024-10-07 09:27:23.776851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.282 [2024-10-07 09:27:23.776867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.282 [2024-10-07 09:27:23.776936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.282 [2024-10-07 09:27:23.776952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.282 #42 NEW cov: 12502 ft: 15734 corp: 21/1295b lim: 
90 exec/s: 42 rss: 75Mb L: 74/85 MS: 1 ChangeByte- 00:08:28.282 [2024-10-07 09:27:23.816772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.282 [2024-10-07 09:27:23.816803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.282 [2024-10-07 09:27:23.816856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.282 [2024-10-07 09:27:23.816872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.282 [2024-10-07 09:27:23.816924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.282 [2024-10-07 09:27:23.816940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.282 [2024-10-07 09:27:23.816995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.282 [2024-10-07 09:27:23.817010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.282 #43 NEW cov: 12502 ft: 15747 corp: 22/1381b lim: 90 exec/s: 43 rss: 75Mb L: 86/86 MS: 1 InsertByte- 00:08:28.541 [2024-10-07 09:27:23.856879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.541 [2024-10-07 09:27:23.856908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.856965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.541 [2024-10-07 09:27:23.856982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.857040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.541 [2024-10-07 09:27:23.857057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.857114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.541 [2024-10-07 09:27:23.857129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.541 #44 NEW cov: 12502 ft: 15764 corp: 23/1456b lim: 90 exec/s: 44 rss: 75Mb L: 75/86 MS: 1 ChangeByte- 00:08:28.541 [2024-10-07 09:27:23.896688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.541 [2024-10-07 09:27:23.896717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.896770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.541 [2024-10-07 09:27:23.896786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.541 #45 NEW cov: 12502 ft: 15801 corp: 24/1499b lim: 90 
exec/s: 45 rss: 75Mb L: 43/86 MS: 1 EraseBytes- 00:08:28.541 [2024-10-07 09:27:23.937120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.541 [2024-10-07 09:27:23.937149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.937197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.541 [2024-10-07 09:27:23.937213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.937268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.541 [2024-10-07 09:27:23.937283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.937336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.541 [2024-10-07 09:27:23.937353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.541 #46 NEW cov: 12502 ft: 15807 corp: 25/1574b lim: 90 exec/s: 46 rss: 75Mb L: 75/86 MS: 1 ChangeBit- 00:08:28.541 [2024-10-07 09:27:23.977257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.541 [2024-10-07 09:27:23.977287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.977329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.541 [2024-10-07 09:27:23.977346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.541 [2024-10-07 09:27:23.977400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.542 [2024-10-07 09:27:23.977416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:23.977472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.542 [2024-10-07 09:27:23.977488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.542 #47 NEW cov: 12502 ft: 15840 corp: 26/1663b lim: 90 exec/s: 47 rss: 75Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:28.542 [2024-10-07 09:27:24.017184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.542 [2024-10-07 09:27:24.017213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:24.017253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.542 [2024-10-07 09:27:24.017271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:24.017327] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.542 [2024-10-07 09:27:24.017343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.542 #48 NEW cov: 12502 ft: 15862 corp: 27/1717b lim: 90 exec/s: 48 rss: 75Mb L: 54/89 MS: 1 InsertRepeatedBytes- 00:08:28.542 [2024-10-07 09:27:24.057476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.542 [2024-10-07 09:27:24.057504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:24.057551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.542 [2024-10-07 09:27:24.057567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:24.057621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.542 [2024-10-07 09:27:24.057636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:24.057694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.542 [2024-10-07 09:27:24.057710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.542 #49 NEW cov: 12502 ft: 15877 corp: 28/1795b lim: 90 exec/s: 49 rss: 75Mb L: 78/89 MS: 1 ChangeBit- 00:08:28.542 [2024-10-07 09:27:24.097274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.542 [2024-10-07 09:27:24.097302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.542 [2024-10-07 09:27:24.097342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.542 [2024-10-07 09:27:24.097359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.801 #50 NEW cov: 12502 ft: 15933 corp: 29/1847b lim: 90 exec/s: 50 rss: 75Mb L: 52/89 MS: 1 InsertRepeatedBytes- 00:08:28.801 [2024-10-07 09:27:24.157540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.801 [2024-10-07 09:27:24.157569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.801 [2024-10-07 09:27:24.157631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.801 [2024-10-07 09:27:24.157647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.801 [2024-10-07 09:27:24.157702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.801 [2024-10-07 09:27:24.157719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.801 #51 NEW cov: 12502 ft: 15942 
corp: 30/1918b lim: 90 exec/s: 51 rss: 75Mb L: 71/89 MS: 1 CrossOver- 00:08:28.801 [2024-10-07 09:27:24.197857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.801 [2024-10-07 09:27:24.197886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.801 [2024-10-07 09:27:24.197934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.801 [2024-10-07 09:27:24.197949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.801 [2024-10-07 09:27:24.198004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.801 [2024-10-07 09:27:24.198021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.801 [2024-10-07 09:27:24.198076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.801 [2024-10-07 09:27:24.198097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.801 #52 NEW cov: 12502 ft: 15986 corp: 31/1993b lim: 90 exec/s: 52 rss: 75Mb L: 75/89 MS: 1 InsertByte- 00:08:28.801 [2024-10-07 09:27:24.238056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.802 [2024-10-07 09:27:24.238085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.238154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.802 [2024-10-07 09:27:24.238171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.238227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.802 [2024-10-07 09:27:24.238242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.238300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.802 [2024-10-07 09:27:24.238314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.802 #53 NEW cov: 12502 ft: 16022 corp: 32/2072b lim: 90 exec/s: 53 rss: 75Mb L: 79/89 MS: 1 InsertByte- 00:08:28.802 [2024-10-07 09:27:24.298189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.802 [2024-10-07 09:27:24.298217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.298287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.802 [2024-10-07 09:27:24.298304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.298359] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.802 [2024-10-07 09:27:24.298375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.298432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.802 [2024-10-07 09:27:24.298447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.802 #54 NEW cov: 12502 ft: 16036 corp: 33/2161b lim: 90 exec/s: 54 rss: 75Mb L: 89/89 MS: 1 CopyPart- 00:08:28.802 [2024-10-07 09:27:24.358314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:28.802 [2024-10-07 09:27:24.358342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.358411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:28.802 [2024-10-07 09:27:24.358427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.358481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:28.802 [2024-10-07 09:27:24.358497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.802 [2024-10-07 09:27:24.358554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:28.802 [2024-10-07 09:27:24.358570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.061 #55 NEW cov: 12502 ft: 16062 corp: 34/2246b lim: 90 exec/s: 55 rss: 75Mb L: 85/89 MS: 1 ChangeByte- 00:08:29.061 [2024-10-07 09:27:24.397968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.061 [2024-10-07 09:27:24.397996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.061 #56 NEW cov: 12502 ft: 16094 corp: 35/2279b lim: 90 exec/s: 56 rss: 76Mb L: 33/89 MS: 1 ChangeByte- 00:08:29.061 [2024-10-07 09:27:24.458589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:29.061 [2024-10-07 09:27:24.458617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.061 [2024-10-07 09:27:24.458678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:29.061 [2024-10-07 09:27:24.458694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.061 [2024-10-07 09:27:24.458750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:29.061 [2024-10-07 09:27:24.458767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.061 [2024-10-07 09:27:24.458826] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:29.061 [2024-10-07 09:27:24.458843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.061 #57 NEW cov: 12502 ft: 16118 corp: 36/2356b lim: 90 exec/s: 28 rss: 76Mb L: 77/89 MS: 1 CrossOver- 00:08:29.061 #57 DONE cov: 12502 ft: 16118 corp: 36/2356b lim: 90 exec/s: 28 rss: 76Mb 00:08:29.061 Done 57 runs in 2 second(s) 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:29.061 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:29.321 09:27:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:29.321 [2024-10-07 09:27:24.665876] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:29.321 [2024-10-07 09:27:24.665955] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494776 ] 00:08:29.580 [2024-10-07 09:27:24.974212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.580 [2024-10-07 09:27:25.064129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.580 [2024-10-07 09:27:25.122972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.580 [2024-10-07 09:27:25.139196] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:29.839 INFO: Running with entropic power schedule (0xFF, 100). 00:08:29.839 INFO: Seed: 1336153915 00:08:29.839 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:29.839 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:29.839 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:29.839 INFO: A corpus is not provided, starting from an empty corpus 00:08:29.839 #2 INITED exec/s: 0 rss: 67Mb 00:08:29.839 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:29.839 This may also happen if the target rejected all inputs we tried so far 00:08:29.839 [2024-10-07 09:27:25.188641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:29.839 [2024-10-07 09:27:25.188674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.839 [2024-10-07 09:27:25.188716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:29.839 [2024-10-07 09:27:25.188732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.839 [2024-10-07 09:27:25.188788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:29.839 [2024-10-07 09:27:25.188805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.098 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:30.098 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:30.098 #3 NEW cov: 12250 ft: 12246 corp: 2/34b lim: 50 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:08:30.098 [2024-10-07 09:27:25.529277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.098 [2024-10-07 09:27:25.529316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.098 [2024-10-07 09:27:25.529372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.098 [2024-10-07 09:27:25.529387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.098 #4 NEW cov: 12363 
ft: 13240 corp: 3/57b lim: 50 exec/s: 0 rss: 74Mb L: 23/33 MS: 1 InsertRepeatedBytes- 00:08:30.098 [2024-10-07 09:27:25.569372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.098 [2024-10-07 09:27:25.569400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.098 [2024-10-07 09:27:25.569437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.098 [2024-10-07 09:27:25.569453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.098 [2024-10-07 09:27:25.569509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.098 [2024-10-07 09:27:25.569524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.098 #8 NEW cov: 12369 ft: 13552 corp: 4/91b lim: 50 exec/s: 0 rss: 74Mb L: 34/34 MS: 4 ShuffleBytes-InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:30.098 [2024-10-07 09:27:25.609530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.098 [2024-10-07 09:27:25.609557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.098 [2024-10-07 09:27:25.609619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.098 [2024-10-07 09:27:25.609635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.098 [2024-10-07 09:27:25.609688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.098 [2024-10-07 09:27:25.609704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.098 #9 NEW cov: 12454 ft: 13806 corp: 5/125b lim: 50 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeByte- 00:08:30.357 [2024-10-07 09:27:25.669496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.357 [2024-10-07 09:27:25.669524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.669574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.357 [2024-10-07 09:27:25.669590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.357 #10 NEW cov: 12454 ft: 13923 corp: 6/149b lim: 50 exec/s: 0 rss: 74Mb L: 24/34 MS: 1 InsertByte- 00:08:30.357 [2024-10-07 09:27:25.729854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.357 [2024-10-07 09:27:25.729880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.729917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.357 [2024-10-07 09:27:25.729933] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.729987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.357 [2024-10-07 09:27:25.730003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.357 #11 NEW cov: 12454 ft: 14040 corp: 7/187b lim: 50 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 CopyPart- 00:08:30.357 [2024-10-07 09:27:25.790013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.357 [2024-10-07 09:27:25.790039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.790094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.357 [2024-10-07 09:27:25.790109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.790163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.357 [2024-10-07 09:27:25.790177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.357 #12 NEW cov: 12454 ft: 14111 corp: 8/219b lim: 50 exec/s: 0 rss: 74Mb L: 32/38 MS: 1 InsertRepeatedBytes- 00:08:30.357 [2024-10-07 09:27:25.830115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.357 [2024-10-07 09:27:25.830142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.830179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.357 [2024-10-07 09:27:25.830194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.830246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.357 [2024-10-07 09:27:25.830261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.357 #13 NEW cov: 12454 ft: 14164 corp: 9/252b lim: 50 exec/s: 0 rss: 74Mb L: 33/38 MS: 1 ChangeBit- 00:08:30.357 [2024-10-07 09:27:25.870041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.357 [2024-10-07 09:27:25.870067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.357 [2024-10-07 09:27:25.870122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.357 [2024-10-07 09:27:25.870138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.357 #14 NEW cov: 12454 ft: 14193 corp: 10/274b lim: 50 exec/s: 0 rss: 74Mb L: 22/38 MS: 1 InsertRepeatedBytes- 00:08:30.357 [2024-10-07 09:27:25.910002] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.357 [2024-10-07 09:27:25.910030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.616 #15 NEW cov: 12454 ft: 14949 corp: 11/291b lim: 50 exec/s: 0 rss: 74Mb L: 17/38 MS: 1 EraseBytes- 00:08:30.616 [2024-10-07 09:27:25.970490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.616 [2024-10-07 09:27:25.970519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.616 [2024-10-07 09:27:25.970554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.616 [2024-10-07 09:27:25.970570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.616 [2024-10-07 09:27:25.970625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.616 [2024-10-07 09:27:25.970642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.616 #16 NEW cov: 12454 ft: 14966 corp: 12/324b lim: 50 exec/s: 0 rss: 74Mb L: 33/38 MS: 1 CrossOver- 00:08:30.616 [2024-10-07 09:27:26.010323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.616 [2024-10-07 09:27:26.010349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.616 #17 NEW cov: 12454 ft: 14981 corp: 13/342b lim: 50 exec/s: 0 rss: 74Mb L: 18/38 MS: 1 InsertByte- 00:08:30.616 [2024-10-07 09:27:26.070657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.616 [2024-10-07 09:27:26.070684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.616 [2024-10-07 09:27:26.070738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.616 [2024-10-07 09:27:26.070755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.616 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:30.616 #18 NEW cov: 12477 ft: 15062 corp: 14/364b lim: 50 exec/s: 0 rss: 75Mb L: 22/38 MS: 1 ChangeBinInt- 00:08:30.616 [2024-10-07 09:27:26.130773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.616 [2024-10-07 09:27:26.130799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.616 [2024-10-07 09:27:26.130857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.616 [2024-10-07 09:27:26.130874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.616 #19 NEW cov: 12477 ft: 15109 corp: 15/387b lim: 50 exec/s: 19 rss: 75Mb L: 23/38 MS: 1 InsertByte- 00:08:30.876 [2024-10-07 09:27:26.190990] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.876 [2024-10-07 09:27:26.191017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.191082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.876 [2024-10-07 09:27:26.191099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.876 #20 NEW cov: 12477 ft: 15157 corp: 16/409b lim: 50 exec/s: 20 rss: 75Mb L: 22/38 MS: 1 ChangeBinInt- 00:08:30.876 [2024-10-07 09:27:26.231224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.876 [2024-10-07 09:27:26.231253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.231305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.876 [2024-10-07 09:27:26.231321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.231374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.876 [2024-10-07 09:27:26.231389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.876 #21 NEW cov: 12477 ft: 15175 corp: 17/441b lim: 50 exec/s: 21 rss: 75Mb L: 32/38 MS: 1 ChangeBit- 00:08:30.876 [2024-10-07 09:27:26.291410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.876 [2024-10-07 09:27:26.291437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.291475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.876 [2024-10-07 09:27:26.291490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.291545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.876 [2024-10-07 09:27:26.291561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.876 #22 NEW cov: 12477 ft: 15187 corp: 18/474b lim: 50 exec/s: 22 rss: 75Mb L: 33/38 MS: 1 InsertRepeatedBytes- 00:08:30.876 [2024-10-07 09:27:26.351598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.876 [2024-10-07 09:27:26.351624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.351677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.876 [2024-10-07 09:27:26.351694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.351751] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.876 [2024-10-07 09:27:26.351767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.876 #23 NEW cov: 12477 ft: 15264 corp: 19/507b lim: 50 exec/s: 23 rss: 75Mb L: 33/38 MS: 1 ChangeBit- 00:08:30.876 [2024-10-07 09:27:26.391668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.876 [2024-10-07 09:27:26.391695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.391757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.876 [2024-10-07 09:27:26.391773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.391834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.876 [2024-10-07 09:27:26.391849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.876 #24 NEW cov: 12477 ft: 15333 corp: 20/540b lim: 50 exec/s: 24 rss: 75Mb L: 33/38 MS: 1 ChangeBit- 00:08:30.876 [2024-10-07 09:27:26.431946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:30.876 [2024-10-07 09:27:26.431974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.432026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:30.876 [2024-10-07 09:27:26.432041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.432096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:30.876 [2024-10-07 09:27:26.432112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.876 [2024-10-07 09:27:26.432167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:30.876 [2024-10-07 09:27:26.432183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.135 #25 NEW cov: 12477 ft: 15671 corp: 21/584b lim: 50 exec/s: 25 rss: 75Mb L: 44/44 MS: 1 CrossOver- 00:08:31.135 [2024-10-07 09:27:26.471607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.135 [2024-10-07 09:27:26.471635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.135 #26 NEW cov: 12477 ft: 15740 corp: 22/602b lim: 50 exec/s: 26 rss: 75Mb L: 18/44 MS: 1 InsertByte- 00:08:31.135 [2024-10-07 09:27:26.511910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.136 [2024-10-07 09:27:26.511939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.511976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.136 [2024-10-07 09:27:26.511991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.136 #27 NEW cov: 12477 ft: 15744 corp: 23/624b lim: 50 exec/s: 27 rss: 75Mb L: 22/44 MS: 1 ChangeBit- 00:08:31.136 [2024-10-07 09:27:26.572336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.136 [2024-10-07 09:27:26.572363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.572408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.136 [2024-10-07 09:27:26.572427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.572481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.136 [2024-10-07 09:27:26.572496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.572547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:31.136 [2024-10-07 09:27:26.572561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.136 #28 NEW cov: 12477 ft: 15776 corp: 24/668b lim: 50 exec/s: 28 rss: 75Mb L: 44/44 MS: 1 ChangeBinInt- 00:08:31.136 [2024-10-07 09:27:26.632500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.136 [2024-10-07 09:27:26.632527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.632581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.136 [2024-10-07 09:27:26.632597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.632651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.136 [2024-10-07 09:27:26.632666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.632721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:31.136 [2024-10-07 09:27:26.632736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.136 #29 NEW cov: 12477 ft: 15785 corp: 25/710b lim: 50 exec/s: 29 rss: 75Mb L: 42/44 MS: 1 CMP- DE: "\377$\365\"\221\203l\004"- 00:08:31.136 [2024-10-07 09:27:26.692528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.136 [2024-10-07 09:27:26.692557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.692593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.136 [2024-10-07 09:27:26.692609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.136 [2024-10-07 09:27:26.692662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.136 [2024-10-07 09:27:26.692677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 #30 NEW cov: 12477 ft: 15789 corp: 26/743b lim: 50 exec/s: 30 rss: 75Mb L: 33/44 MS: 1 CopyPart- 00:08:31.396 [2024-10-07 09:27:26.752661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.396 [2024-10-07 09:27:26.752689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.752750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.396 [2024-10-07 09:27:26.752766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.752824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.396 [2024-10-07 09:27:26.752842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 #31 NEW cov: 12477 ft: 15811 corp: 27/777b lim: 50 exec/s: 31 rss: 75Mb L: 34/44 MS: 1 ChangeASCIIInt- 00:08:31.396 [2024-10-07 09:27:26.792736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.396 [2024-10-07 09:27:26.792763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.792809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.396 [2024-10-07 09:27:26.792830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.792884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.396 [2024-10-07 09:27:26.792900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 #32 NEW cov: 12477 ft: 15821 corp: 28/810b lim: 50 exec/s: 32 rss: 75Mb L: 33/44 MS: 1 ChangeBit- 00:08:31.396 [2024-10-07 09:27:26.852791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.396 [2024-10-07 09:27:26.852820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.852860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.396 [2024-10-07 09:27:26.852875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 #33 NEW cov: 12477 ft: 15824 corp: 29/833b lim: 50 exec/s: 33 rss: 75Mb L: 23/44 MS: 1 CopyPart- 00:08:31.396 [2024-10-07 09:27:26.893014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.396 [2024-10-07 09:27:26.893041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.893089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.396 [2024-10-07 09:27:26.893105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.893174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.396 [2024-10-07 09:27:26.893190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 #34 NEW cov: 12477 ft: 15846 corp: 30/867b lim: 50 exec/s: 34 rss: 75Mb L: 34/44 MS: 1 InsertByte- 00:08:31.396 [2024-10-07 09:27:26.953077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.396 [2024-10-07 09:27:26.953104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-10-07 09:27:26.953144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.396 [2024-10-07 09:27:26.953159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 #35 NEW cov: 12477 ft: 15858 corp: 31/894b lim: 50 exec/s: 35 rss: 75Mb L: 27/44 MS: 1 CopyPart- 00:08:31.656 [2024-10-07 09:27:26.993232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.656 [2024-10-07 09:27:26.993258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:26.993314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.656 [2024-10-07 09:27:26.993330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 #36 NEW cov: 12477 ft: 15919 corp: 32/916b lim: 50 exec/s: 36 rss: 75Mb L: 22/44 MS: 1 EraseBytes- 00:08:31.656 [2024-10-07 09:27:27.053684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.656 [2024-10-07 09:27:27.053711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.053774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.656 [2024-10-07 09:27:27.053791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.053847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 
00:08:31.656 [2024-10-07 09:27:27.053863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.053918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:31.656 [2024-10-07 09:27:27.053935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #37 NEW cov: 12477 ft: 15923 corp: 33/962b lim: 50 exec/s: 37 rss: 76Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:08:31.656 [2024-10-07 09:27:27.113870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.656 [2024-10-07 09:27:27.113897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.113967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.656 [2024-10-07 09:27:27.113983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.114038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:31.656 [2024-10-07 09:27:27.114052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.114108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:31.656 [2024-10-07 09:27:27.114122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #38 NEW cov: 12477 ft: 15931 corp: 34/1009b lim: 50 exec/s: 38 rss: 76Mb L: 47/47 MS: 1 CopyPart- 00:08:31.656 [2024-10-07 09:27:27.173677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:31.656 [2024-10-07 09:27:27.173703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-10-07 09:27:27.173741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:31.656 [2024-10-07 09:27:27.173755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 #39 NEW cov: 12477 ft: 15939 corp: 35/1032b lim: 50 exec/s: 19 rss: 76Mb L: 23/47 MS: 1 ChangeByte- 00:08:31.656 #39 DONE cov: 12477 ft: 15939 corp: 35/1032b lim: 50 exec/s: 19 rss: 76Mb 00:08:31.656 ###### Recommended dictionary. ###### 00:08:31.656 "\377$\365\"\221\203l\004" # Uses: 0 00:08:31.656 ###### End of recommended dictionary. 
###### 00:08:31.656 Done 39 runs in 2 second(s) 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:31.916 09:27:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:31.916 [2024-10-07 09:27:27.413133] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:31.916 [2024-10-07 09:27:27.413222] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495138 ] 00:08:32.175 [2024-10-07 09:27:27.725701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.435 [2024-10-07 09:27:27.821051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.435 [2024-10-07 09:27:27.880330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.435 [2024-10-07 09:27:27.896512] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:32.435 INFO: Running with entropic power schedule (0xFF, 100). 00:08:32.435 INFO: Seed: 4095123529 00:08:32.435 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:32.435 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:32.435 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:32.435 INFO: A corpus is not provided, starting from an empty corpus 00:08:32.435 #2 INITED exec/s: 0 rss: 67Mb 00:08:32.435 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:32.435 This may also happen if the target rejected all inputs we tried so far 00:08:32.435 [2024-10-07 09:27:27.941525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.435 [2024-10-07 09:27:27.941558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.435 [2024-10-07 09:27:27.941608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.435 [2024-10-07 09:27:27.941627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.435 [2024-10-07 09:27:27.941657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.435 [2024-10-07 09:27:27.941674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.435 [2024-10-07 09:27:27.941708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.435 [2024-10-07 09:27:27.941724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.953 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:32.953 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:32.953 #4 NEW cov: 12276 ft: 12243 corp: 2/70b lim: 85 exec/s: 0 rss: 74Mb L: 69/69 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:32.953 [2024-10-07 09:27:28.302456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.953 [2024-10-07 09:27:28.302500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.953 
[2024-10-07 09:27:28.302552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.953 [2024-10-07 09:27:28.302570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.302600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.953 [2024-10-07 09:27:28.302617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.302646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.953 [2024-10-07 09:27:28.302663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.953 #10 NEW cov: 12389 ft: 12809 corp: 3/140b lim: 85 exec/s: 0 rss: 74Mb L: 70/70 MS: 1 InsertByte- 00:08:32.953 [2024-10-07 09:27:28.392564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.953 [2024-10-07 09:27:28.392596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.392645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.953 [2024-10-07 09:27:28.392663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.392693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.953 [2024-10-07 09:27:28.392710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.392739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.953 [2024-10-07 09:27:28.392755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.953 #11 NEW cov: 12395 ft: 13195 corp: 4/218b lim: 85 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:08:32.953 [2024-10-07 09:27:28.452688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:32.953 [2024-10-07 09:27:28.452718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.452766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:32.953 [2024-10-07 09:27:28.452784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.452822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:32.953 [2024-10-07 09:27:28.452839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.953 [2024-10-07 09:27:28.452874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:32.953 [2024-10-07 09:27:28.452891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.213 #12 NEW cov: 12480 ft: 13420 corp: 5/296b lim: 85 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 ChangeBit- 00:08:33.213 [2024-10-07 09:27:28.542940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.213 [2024-10-07 09:27:28.542979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.543030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.213 [2024-10-07 09:27:28.543049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.543080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.213 [2024-10-07 09:27:28.543097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.543127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.213 [2024-10-07 09:27:28.543144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.213 #13 NEW cov: 12480 ft: 13527 corp: 6/374b lim: 85 exec/s: 0 rss: 74Mb L: 78/78 MS: 1 ChangeBit- 00:08:33.213 [2024-10-07 09:27:28.603116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.213 [2024-10-07 09:27:28.603146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.603179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.213 [2024-10-07 09:27:28.603197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.603228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.213 [2024-10-07 09:27:28.603244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.603289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.213 [2024-10-07 09:27:28.603306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.213 #14 NEW cov: 12480 ft: 13631 corp: 7/445b lim: 85 exec/s: 0 rss: 74Mb L: 71/78 MS: 1 InsertByte- 00:08:33.213 [2024-10-07 09:27:28.693353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.213 [2024-10-07 09:27:28.693383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.213 [2024-10-07 09:27:28.693416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.213 [2024-10-07 09:27:28.693433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.214 [2024-10-07 09:27:28.693464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.214 [2024-10-07 09:27:28.693481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.214 [2024-10-07 09:27:28.693510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.214 [2024-10-07 09:27:28.693530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.214 #15 NEW cov: 12480 ft: 13670 corp: 8/516b lim: 85 exec/s: 0 rss: 74Mb L: 71/78 MS: 1 ChangeBit- 00:08:33.474 [2024-10-07 09:27:28.783389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.474 [2024-10-07 09:27:28.783419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.474 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:33.474 #16 NEW cov: 12497 ft: 14590 corp: 9/533b lim: 85 exec/s: 0 rss: 74Mb L: 17/78 MS: 1 CrossOver- 00:08:33.474 [2024-10-07 09:27:28.853540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.474 [2024-10-07 09:27:28.853569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.474 #17 NEW cov: 12497 ft: 14604 corp: 10/550b lim: 85 exec/s: 17 rss: 74Mb L: 17/78 MS: 1 ChangeBit- 00:08:33.474 [2024-10-07 09:27:28.943972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.474 [2024-10-07 09:27:28.944001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.474 [2024-10-07 09:27:28.944049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.474 [2024-10-07 09:27:28.944067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.474 [2024-10-07 09:27:28.944098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.474 [2024-10-07 09:27:28.944114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.474 [2024-10-07 09:27:28.944143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.474 [2024-10-07 09:27:28.944159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.474 #18 NEW cov: 12497 ft: 14686 corp: 11/630b lim: 85 exec/s: 18 rss: 74Mb L: 80/80 MS: 1 CrossOver- 00:08:33.474 [2024-10-07 09:27:29.034279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.474 [2024-10-07 09:27:29.034309] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.474 [2024-10-07 09:27:29.034358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.474 [2024-10-07 09:27:29.034377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.474 [2024-10-07 09:27:29.034408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.474 [2024-10-07 09:27:29.034425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.474 [2024-10-07 09:27:29.034455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.474 [2024-10-07 09:27:29.034472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.733 #19 NEW cov: 12497 ft: 14747 corp: 12/708b lim: 85 exec/s: 19 rss: 74Mb L: 78/80 MS: 1 ChangeBit- 00:08:33.734 [2024-10-07 09:27:29.094433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.734 [2024-10-07 09:27:29.094466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.734 [2024-10-07 09:27:29.094499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.734 [2024-10-07 09:27:29.094521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.734 [2024-10-07 09:27:29.094552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.734 [2024-10-07 09:27:29.094569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.734 [2024-10-07 09:27:29.094597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.734 [2024-10-07 09:27:29.094614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.734 #20 NEW cov: 12497 ft: 14848 corp: 13/788b lim: 85 exec/s: 20 rss: 74Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:08:33.734 [2024-10-07 09:27:29.154363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.734 [2024-10-07 09:27:29.154393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.734 #25 NEW cov: 12497 ft: 14905 corp: 14/813b lim: 85 exec/s: 25 rss: 74Mb L: 25/80 MS: 5 CrossOver-ChangeBit-ShuffleBytes-ShuffleBytes-CopyPart- 00:08:33.734 [2024-10-07 09:27:29.224706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.734 [2024-10-07 09:27:29.224737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.734 [2024-10-07 09:27:29.224787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:1 nsid:0 00:08:33.734 [2024-10-07 09:27:29.224805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.734 [2024-10-07 09:27:29.224846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.734 [2024-10-07 09:27:29.224863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.734 [2024-10-07 09:27:29.224893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.734 [2024-10-07 09:27:29.224909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.993 #26 NEW cov: 12497 ft: 14937 corp: 15/896b lim: 85 exec/s: 26 rss: 75Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:08:33.993 [2024-10-07 09:27:29.315025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.993 [2024-10-07 09:27:29.315057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.993 [2024-10-07 09:27:29.315091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.994 [2024-10-07 09:27:29.315109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.315140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.994 [2024-10-07 09:27:29.315156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.315186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.994 [2024-10-07 09:27:29.315202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.994 #27 NEW cov: 12497 ft: 14969 corp: 16/967b lim: 85 exec/s: 27 rss: 75Mb L: 71/83 MS: 1 ChangeBinInt- 00:08:33.994 [2024-10-07 09:27:29.405248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.994 [2024-10-07 09:27:29.405284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.405318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.994 [2024-10-07 09:27:29.405336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.405365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.994 [2024-10-07 09:27:29.405381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.405427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.994 [2024-10-07 09:27:29.405444] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.994 #28 NEW cov: 12497 ft: 14997 corp: 17/1047b lim: 85 exec/s: 28 rss: 75Mb L: 80/83 MS: 1 CopyPart- 00:08:33.994 [2024-10-07 09:27:29.495397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:33.994 [2024-10-07 09:27:29.495427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.495474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:33.994 [2024-10-07 09:27:29.495492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.495523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:33.994 [2024-10-07 09:27:29.495540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.994 [2024-10-07 09:27:29.495569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:33.994 [2024-10-07 09:27:29.495585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.253 #29 NEW cov: 12497 ft: 15008 corp: 18/1119b lim: 85 exec/s: 29 rss: 75Mb L: 72/83 MS: 1 InsertByte- 00:08:34.253 [2024-10-07 09:27:29.585690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:34.253 [2024-10-07 09:27:29.585722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.585756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:34.254 [2024-10-07 09:27:29.585774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.585805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:34.254 [2024-10-07 09:27:29.585830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.585860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:34.254 [2024-10-07 09:27:29.585877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.254 #30 NEW cov: 12497 ft: 15031 corp: 19/1197b lim: 85 exec/s: 30 rss: 75Mb L: 78/83 MS: 1 CrossOver- 00:08:34.254 [2024-10-07 09:27:29.645846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:34.254 [2024-10-07 09:27:29.645875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.645908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:34.254 [2024-10-07 09:27:29.645930] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.645962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:34.254 [2024-10-07 09:27:29.645978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.646023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:34.254 [2024-10-07 09:27:29.646039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.254 #31 NEW cov: 12497 ft: 15058 corp: 20/1275b lim: 85 exec/s: 31 rss: 75Mb L: 78/83 MS: 1 ChangeBit- 00:08:34.254 [2024-10-07 09:27:29.735901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:34.254 [2024-10-07 09:27:29.735930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.254 [2024-10-07 09:27:29.735978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:34.254 [2024-10-07 09:27:29.735996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.254 #32 NEW cov: 12497 ft: 15412 corp: 21/1320b lim: 85 exec/s: 32 rss: 75Mb L: 45/83 MS: 1 InsertRepeatedBytes- 00:08:34.514 [2024-10-07 09:27:29.826323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:34.514 [2024-10-07 09:27:29.826354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.826388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:34.514 [2024-10-07 09:27:29.826406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.826438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:34.514 [2024-10-07 09:27:29.826454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.826483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:34.514 [2024-10-07 09:27:29.826500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.514 #33 NEW cov: 12504 ft: 15458 corp: 22/1400b lim: 85 exec/s: 33 rss: 75Mb L: 80/83 MS: 1 InsertRepeatedBytes- 00:08:34.514 [2024-10-07 09:27:29.886461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:34.514 [2024-10-07 09:27:29.886492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.886524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:34.514 
[2024-10-07 09:27:29.886542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.886573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:34.514 [2024-10-07 09:27:29.886589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.886618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:34.514 [2024-10-07 09:27:29.886635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.514 #34 NEW cov: 12504 ft: 15477 corp: 23/1483b lim: 85 exec/s: 34 rss: 75Mb L: 83/83 MS: 1 CrossOver- 00:08:34.514 [2024-10-07 09:27:29.936491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:34.514 [2024-10-07 09:27:29.936520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.936567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:34.514 [2024-10-07 09:27:29.936584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.514 [2024-10-07 09:27:29.936615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:34.514 [2024-10-07 09:27:29.936633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.514 #35 NEW cov: 12504 ft: 15778 corp: 24/1546b lim: 85 exec/s: 17 rss: 75Mb L: 63/83 MS: 1 EraseBytes- 00:08:34.514 #35 DONE cov: 12504 ft: 15778 corp: 24/1546b lim: 85 exec/s: 17 rss: 75Mb 00:08:34.514 Done 35 runs in 2 second(s) 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:34.775 09:27:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:34.775 [2024-10-07 09:27:30.204200] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:34.775 [2024-10-07 09:27:30.204283] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495491 ] 00:08:35.034 [2024-10-07 09:27:30.519627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.294 [2024-10-07 09:27:30.606152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.294 [2024-10-07 09:27:30.665268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.294 [2024-10-07 09:27:30.681464] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:35.294 INFO: Running with entropic power schedule (0xFF, 100). 00:08:35.294 INFO: Seed: 2585154152 00:08:35.294 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:35.294 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:35.294 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:35.294 INFO: A corpus is not provided, starting from an empty corpus 00:08:35.294 #2 INITED exec/s: 0 rss: 67Mb 00:08:35.294 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:35.294 This may also happen if the target rejected all inputs we tried so far 00:08:35.294 [2024-10-07 09:27:30.759180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.294 [2024-10-07 09:27:30.759234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.295 [2024-10-07 09:27:30.759324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.295 [2024-10-07 09:27:30.759351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.295 [2024-10-07 09:27:30.759479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.295 [2024-10-07 09:27:30.759511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.295 [2024-10-07 09:27:30.759644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:35.295 [2024-10-07 09:27:30.759664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.554 NEW_FUNC[1/715]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:35.554 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:35.554 #9 NEW cov: 12191 ft: 12189 corp: 2/23b lim: 25 exec/s: 0 rss: 74Mb L: 22/22 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:35.554 [2024-10-07 09:27:31.098933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.554 [2024-10-07 09:27:31.098979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.813 #10 NEW cov: 12321 ft: 13264 corp: 3/29b lim: 25 exec/s: 0 rss: 74Mb L: 6/22 MS: 1 CrossOver- 00:08:35.813 [2024-10-07 09:27:31.169869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.813 [2024-10-07 09:27:31.169898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.169975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.813 [2024-10-07 09:27:31.169992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.170076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.813 [2024-10-07 09:27:31.170092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.170175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:35.813 [2024-10-07 09:27:31.170194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.813 #11 
NEW cov: 12327 ft: 13630 corp: 4/52b lim: 25 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 InsertByte- 00:08:35.813 [2024-10-07 09:27:31.219794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.813 [2024-10-07 09:27:31.219827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.219920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.813 [2024-10-07 09:27:31.219936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.220013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.813 [2024-10-07 09:27:31.220032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.813 #12 NEW cov: 12412 ft: 14108 corp: 5/67b lim: 25 exec/s: 0 rss: 74Mb L: 15/23 MS: 1 EraseBytes- 00:08:35.813 [2024-10-07 09:27:31.290330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.813 [2024-10-07 09:27:31.290358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.290447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:35.813 [2024-10-07 09:27:31.290464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.290545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:35.813 [2024-10-07 09:27:31.290560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.813 [2024-10-07 09:27:31.290649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:35.813 [2024-10-07 09:27:31.290669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.813 #13 NEW cov: 12412 ft: 14181 corp: 6/89b lim: 25 exec/s: 0 rss: 74Mb L: 22/23 MS: 1 ChangeByte- 00:08:35.813 [2024-10-07 09:27:31.339743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:35.813 [2024-10-07 09:27:31.339770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.073 #14 NEW cov: 12412 ft: 14344 corp: 7/95b lim: 25 exec/s: 0 rss: 74Mb L: 6/23 MS: 1 ShuffleBytes- 00:08:36.073 [2024-10-07 09:27:31.410912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.073 [2024-10-07 09:27:31.410942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.411015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.073 [2024-10-07 09:27:31.411033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.411124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.073 [2024-10-07 09:27:31.411142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.411231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.073 [2024-10-07 09:27:31.411251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.073 #15 NEW cov: 12412 ft: 14482 corp: 8/115b lim: 25 exec/s: 0 rss: 74Mb L: 20/23 MS: 1 InsertRepeatedBytes- 00:08:36.073 [2024-10-07 09:27:31.481202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.073 [2024-10-07 09:27:31.481233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.481289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.073 [2024-10-07 09:27:31.481306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.481372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.073 [2024-10-07 09:27:31.481390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.481487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.073 [2024-10-07 09:27:31.481504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.073 #16 NEW cov: 12412 ft: 14539 corp: 9/137b lim: 25 exec/s: 0 rss: 74Mb L: 22/23 MS: 1 ShuffleBytes- 00:08:36.073 [2024-10-07 09:27:31.551301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.073 [2024-10-07 09:27:31.551329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.551413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.073 [2024-10-07 09:27:31.551429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.551508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.073 [2024-10-07 09:27:31.551522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.551617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.073 [2024-10-07 09:27:31.551635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.073 #17 NEW cov: 12412 ft: 14557 corp: 
10/159b lim: 25 exec/s: 0 rss: 74Mb L: 22/23 MS: 1 InsertRepeatedBytes- 00:08:36.073 [2024-10-07 09:27:31.601539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.073 [2024-10-07 09:27:31.601568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.073 [2024-10-07 09:27:31.601638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.073 [2024-10-07 09:27:31.601659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.074 [2024-10-07 09:27:31.601724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.074 [2024-10-07 09:27:31.601744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.074 [2024-10-07 09:27:31.601839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.074 [2024-10-07 09:27:31.601873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.074 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:36.074 #18 NEW cov: 12435 ft: 14684 corp: 11/182b lim: 25 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 InsertByte- 00:08:36.333 [2024-10-07 09:27:31.651164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.333 [2024-10-07 09:27:31.651193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.333 #19 NEW cov: 12435 ft: 14699 corp: 12/188b lim: 25 exec/s: 0 rss: 74Mb L: 6/23 MS: 1 ChangeByte- 00:08:36.333 [2024-10-07 09:27:31.721573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.333 [2024-10-07 09:27:31.721601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.333 #20 NEW cov: 12435 ft: 14710 corp: 13/194b lim: 25 exec/s: 20 rss: 74Mb L: 6/23 MS: 1 ChangeByte- 00:08:36.333 [2024-10-07 09:27:31.771719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.333 [2024-10-07 09:27:31.771746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.333 #21 NEW cov: 12435 ft: 14740 corp: 14/200b lim: 25 exec/s: 21 rss: 74Mb L: 6/23 MS: 1 ChangeBinInt- 00:08:36.333 [2024-10-07 09:27:31.842740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.333 [2024-10-07 09:27:31.842771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.333 [2024-10-07 09:27:31.842859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.333 [2024-10-07 09:27:31.842879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.333 [2024-10-07 
09:27:31.842946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.333 [2024-10-07 09:27:31.842962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.333 [2024-10-07 09:27:31.843056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.333 [2024-10-07 09:27:31.843073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.333 #22 NEW cov: 12435 ft: 14756 corp: 15/222b lim: 25 exec/s: 22 rss: 74Mb L: 22/23 MS: 1 ChangeByte- 00:08:36.593 [2024-10-07 09:27:31.913209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.593 [2024-10-07 09:27:31.913241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:31.913320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.593 [2024-10-07 09:27:31.913341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:31.913411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.593 [2024-10-07 09:27:31.913427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:31.913511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.593 [2024-10-07 09:27:31.913530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.593 #23 NEW cov: 12435 ft: 14794 corp: 16/244b lim: 25 exec/s: 23 rss: 75Mb L: 22/23 MS: 1 ChangeBinInt- 00:08:36.593 [2024-10-07 09:27:31.963002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.593 [2024-10-07 09:27:31.963032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:31.963099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.593 [2024-10-07 09:27:31.963119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.593 #27 NEW cov: 12435 ft: 15016 corp: 17/255b lim: 25 exec/s: 27 rss: 75Mb L: 11/23 MS: 4 ChangeByte-ChangeByte-ShuffleBytes-CrossOver- 00:08:36.593 [2024-10-07 09:27:32.023891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.593 [2024-10-07 09:27:32.023921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:32.024016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.593 [2024-10-07 09:27:32.024034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:08:36.593 [2024-10-07 09:27:32.024114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.593 [2024-10-07 09:27:32.024132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:32.024228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.593 [2024-10-07 09:27:32.024246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.593 #28 NEW cov: 12435 ft: 15040 corp: 18/278b lim: 25 exec/s: 28 rss: 75Mb L: 23/23 MS: 1 ShuffleBytes- 00:08:36.593 [2024-10-07 09:27:32.083667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.593 [2024-10-07 09:27:32.083700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.593 [2024-10-07 09:27:32.083773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.593 [2024-10-07 09:27:32.083793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.593 #29 NEW cov: 12435 ft: 15096 corp: 19/291b lim: 25 exec/s: 29 rss: 75Mb L: 13/23 MS: 1 EraseBytes- 00:08:36.852 [2024-10-07 09:27:32.163760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.853 [2024-10-07 09:27:32.163790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.853 #30 NEW cov: 12435 ft: 15108 corp: 20/298b lim: 25 exec/s: 30 rss: 75Mb L: 7/23 MS: 1 InsertByte- 00:08:36.853 [2024-10-07 09:27:32.234956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.853 [2024-10-07 09:27:32.234985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.235076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.853 [2024-10-07 09:27:32.235095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.235173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.853 [2024-10-07 09:27:32.235189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.235273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.853 [2024-10-07 09:27:32.235289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.853 #31 NEW cov: 12435 ft: 15137 corp: 21/320b lim: 25 exec/s: 31 rss: 75Mb L: 22/23 MS: 1 ChangeBit- 00:08:36.853 [2024-10-07 09:27:32.305425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.853 [2024-10-07 09:27:32.305455] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.305541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.853 [2024-10-07 09:27:32.305561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.305625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.853 [2024-10-07 09:27:32.305642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.305737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.853 [2024-10-07 09:27:32.305755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:36.853 #32 NEW cov: 12435 ft: 15186 corp: 22/342b lim: 25 exec/s: 32 rss: 75Mb L: 22/23 MS: 1 ChangeBinInt- 00:08:36.853 [2024-10-07 09:27:32.375830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:36.853 [2024-10-07 09:27:32.375857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.376000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:36.853 [2024-10-07 09:27:32.376020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.376114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:36.853 [2024-10-07 09:27:32.376134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.853 [2024-10-07 09:27:32.376219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:36.853 [2024-10-07 09:27:32.376236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.112 #33 NEW cov: 12435 ft: 15203 corp: 23/365b lim: 25 exec/s: 33 rss: 75Mb L: 23/23 MS: 1 InsertByte- 00:08:37.113 [2024-10-07 09:27:32.446252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:37.113 [2024-10-07 09:27:32.446281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.113 [2024-10-07 09:27:32.446376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:37.113 [2024-10-07 09:27:32.446392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.113 [2024-10-07 09:27:32.446474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:37.113 [2024-10-07 09:27:32.446495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:37.113 [2024-10-07 09:27:32.446583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:37.113 [2024-10-07 09:27:32.446599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.113 #34 NEW cov: 12435 ft: 15217 corp: 24/388b lim: 25 exec/s: 34 rss: 75Mb L: 23/23 MS: 1 ShuffleBytes- 00:08:37.113 [2024-10-07 09:27:32.515789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:37.113 [2024-10-07 09:27:32.515819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.113 #35 NEW cov: 12435 ft: 15231 corp: 25/394b lim: 25 exec/s: 35 rss: 75Mb L: 6/23 MS: 1 ChangeBit- 00:08:37.113 [2024-10-07 09:27:32.586863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:37.113 [2024-10-07 09:27:32.586892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.113 [2024-10-07 09:27:32.586991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:37.113 [2024-10-07 09:27:32.587011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.113 [2024-10-07 09:27:32.587092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:37.113 [2024-10-07 09:27:32.587108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.113 [2024-10-07 09:27:32.587201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:37.113 [2024-10-07 09:27:32.587218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.113 #36 NEW cov: 12435 ft: 15242 corp: 26/417b lim: 25 exec/s: 36 rss: 75Mb L: 23/23 MS: 1 ShuffleBytes- 00:08:37.113 [2024-10-07 09:27:32.636330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:37.113 [2024-10-07 09:27:32.636357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.113 #37 NEW cov: 12435 ft: 15244 corp: 27/424b lim: 25 exec/s: 37 rss: 75Mb L: 7/23 MS: 1 InsertByte- 00:08:37.372 [2024-10-07 09:27:32.687090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:37.372 [2024-10-07 09:27:32.687118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.372 [2024-10-07 09:27:32.687182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:37.372 [2024-10-07 09:27:32.687197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.372 #38 NEW cov: 12435 ft: 15256 corp: 28/437b lim: 25 exec/s: 19 rss: 75Mb L: 13/23 MS: 1 CrossOver- 00:08:37.372 #38 DONE cov: 12435 ft: 15256 corp: 28/437b lim: 25 exec/s: 19 rss: 75Mb 00:08:37.372 Done 38 runs in 
2 second(s) 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:37.372 09:27:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:37.372 [2024-10-07 09:27:32.913621] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:37.372 [2024-10-07 09:27:32.913693] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495854 ] 00:08:37.940 [2024-10-07 09:27:33.232404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.941 [2024-10-07 09:27:33.321773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.941 [2024-10-07 09:27:33.380787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.941 [2024-10-07 09:27:33.397017] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:37.941 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:37.941 INFO: Seed: 1004188319 00:08:37.941 INFO: Loaded 1 modules (384097 inline 8-bit counters): 384097 [0x2be68cc, 0x2c4452d), 00:08:37.941 INFO: Loaded 1 PC tables (384097 PCs): 384097 [0x2c44530,0x3220b40), 00:08:37.941 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:37.941 INFO: A corpus is not provided, starting from an empty corpus 00:08:37.941 #2 INITED exec/s: 0 rss: 67Mb 00:08:37.941 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:37.941 This may also happen if the target rejected all inputs we tried so far 00:08:37.941 [2024-10-07 09:27:33.446547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.941 [2024-10-07 09:27:33.446579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.941 [2024-10-07 09:27:33.446620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.941 [2024-10-07 09:27:33.446637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.941 [2024-10-07 09:27:33.446692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:37.941 [2024-10-07 09:27:33.446709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.510 NEW_FUNC[1/715]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:38.510 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:38.510 #17 NEW cov: 12280 ft: 12277 corp: 2/62b lim: 100 exec/s: 0 rss: 74Mb L: 61/61 MS: 5 CopyPart-EraseBytes-InsertByte-ChangeBit-InsertRepeatedBytes- 00:08:38.510 [2024-10-07 09:27:33.787242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.787281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.787339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.787356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.510 NEW_FUNC[1/1]: 0x1f8f2a8 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:957 00:08:38.510 #18 NEW cov: 12394 ft: 13284 corp: 3/119b lim: 100 exec/s: 0 rss: 74Mb L: 57/61 MS: 1 EraseBytes- 00:08:38.510 [2024-10-07 09:27:33.847664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.847694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.847757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.847773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.847833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.847849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.847905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.847921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.510 #23 NEW cov: 12400 ft: 13828 corp: 4/210b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 5 ShuffleBytes-CopyPart-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:38.510 [2024-10-07 09:27:33.887600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.887630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.887670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.887686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.887742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.887756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.510 #24 NEW cov: 12485 ft: 14067 corp: 5/271b lim: 100 exec/s: 0 rss: 74Mb L: 61/91 MS: 1 ChangeBit- 00:08:38.510 [2024-10-07 09:27:33.927682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289300471134023663 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.927710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.927758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.510 [2024-10-07 09:27:33.927774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.510 [2024-10-07 09:27:33.927829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:38.510 [2024-10-07 09:27:33.927846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.510 #25 NEW cov: 12485 ft: 14248 corp: 6/332b lim: 100 exec/s: 0 rss: 74Mb L: 61/91 MS: 1 ChangeByte- 00:08:38.511 [2024-10-07 09:27:33.967785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:33.967820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.511 [2024-10-07 09:27:33.967872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:33.967888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.511 [2024-10-07 09:27:33.967942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17234976637795168239 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:33.967956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.511 #26 NEW cov: 12485 ft: 14285 corp: 7/393b lim: 100 exec/s: 0 rss: 74Mb L: 61/91 MS: 1 ChangeByte- 00:08:38.511 [2024-10-07 09:27:34.028141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:34.028169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.511 [2024-10-07 09:27:34.028219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:34.028235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.511 [2024-10-07 09:27:34.028288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:34.028304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.511 [2024-10-07 09:27:34.028358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.511 [2024-10-07 09:27:34.028373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.511 #27 NEW cov: 12485 ft: 14315 corp: 8/484b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 ChangeBinInt- 00:08:38.770 [2024-10-07 09:27:34.088025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.088053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 
09:27:34.088092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61416 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.088108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.770 #28 NEW cov: 12485 ft: 14470 corp: 9/528b lim: 100 exec/s: 0 rss: 74Mb L: 44/91 MS: 1 EraseBytes- 00:08:38.770 [2024-10-07 09:27:34.128400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.128427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.128478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.128494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.128552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.128567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.128620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.128636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.770 #29 NEW cov: 12485 ft: 14490 corp: 10/619b lim: 100 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 ShuffleBytes- 00:08:38.770 [2024-10-07 09:27:34.188535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.188565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.188617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.188633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.188688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221929416390 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.188704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.188760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.188775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.770 #30 NEW cov: 12485 ft: 14558 corp: 11/713b lim: 100 exec/s: 0 rss: 74Mb L: 94/94 MS: 1 CopyPart- 00:08:38.770 [2024-10-07 09:27:34.248587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.248615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.248661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61360 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.248677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.770 [2024-10-07 09:27:34.248733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17234976637795168239 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.770 [2024-10-07 09:27:34.248750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.770 #31 NEW cov: 12485 ft: 14616 corp: 12/774b lim: 100 exec/s: 0 rss: 74Mb L: 61/94 MS: 1 ChangeBit- 00:08:38.770 [2024-10-07 09:27:34.308926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.771 [2024-10-07 09:27:34.308953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.771 [2024-10-07 09:27:34.309010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.771 [2024-10-07 09:27:34.309024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.771 [2024-10-07 09:27:34.309080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221929416390 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.771 [2024-10-07 09:27:34.309095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.771 [2024-10-07 09:27:34.309151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:38.771 [2024-10-07 09:27:34.309166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.030 NEW_FUNC[1/1]: 0x1bf7d88 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:39.030 #32 NEW cov: 12508 ft: 14658 corp: 13/861b lim: 100 exec/s: 0 rss: 74Mb L: 87/94 MS: 1 EraseBytes- 00:08:39.030 [2024-10-07 09:27:34.369079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.030 [2024-10-07 09:27:34.369107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.030 [2024-10-07 09:27:34.369163] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4595579026818909894 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.030 [2024-10-07 09:27:34.369178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.030 [2024-10-07 09:27:34.369250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.030 [2024-10-07 09:27:34.369266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.030 [2024-10-07 09:27:34.369322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.030 [2024-10-07 09:27:34.369338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.031 #33 NEW cov: 12508 ft: 14678 corp: 14/952b lim: 100 exec/s: 0 rss: 74Mb L: 91/94 MS: 1 ChangeByte- 00:08:39.031 [2024-10-07 09:27:34.408881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289085873093079023 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.408908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.408946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61416 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.408962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.031 #34 NEW cov: 12508 ft: 14773 corp: 15/996b lim: 100 exec/s: 34 rss: 75Mb L: 44/94 MS: 1 ChangeBinInt- 00:08:39.031 [2024-10-07 09:27:34.469187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61249 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.469216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.469262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.469278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.469334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.469352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.031 #35 NEW cov: 12508 ft: 14834 corp: 16/1058b lim: 100 exec/s: 35 rss: 75Mb L: 62/94 MS: 1 InsertByte- 00:08:39.031 [2024-10-07 09:27:34.509428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323135665081937903 len:199 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.509458] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.509509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.509525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.509580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323190394706642630 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.509595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.509650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.509665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.031 #36 NEW cov: 12508 ft: 14845 corp: 17/1157b lim: 100 exec/s: 36 rss: 75Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:08:39.031 [2024-10-07 09:27:34.549423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.549452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.549488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.549505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.549560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.549577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.031 #37 NEW cov: 12508 ft: 14856 corp: 18/1222b lim: 100 exec/s: 37 rss: 75Mb L: 65/99 MS: 1 CMP- DE: "\377\377\377\327"- 00:08:39.031 [2024-10-07 09:27:34.589540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.589570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.589609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308299800559 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.589625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.031 [2024-10-07 09:27:34.589682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 
len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.031 [2024-10-07 09:27:34.589698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.291 #38 NEW cov: 12508 ft: 14867 corp: 19/1291b lim: 100 exec/s: 38 rss: 75Mb L: 69/99 MS: 1 CopyPart- 00:08:39.291 [2024-10-07 09:27:34.649669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289300471134023663 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.649697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.649735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.649752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.649808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.649844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.291 #39 NEW cov: 12508 ft: 14897 corp: 20/1352b lim: 100 exec/s: 39 rss: 75Mb L: 61/99 MS: 1 ChangeByte- 00:08:39.291 [2024-10-07 09:27:34.709674] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.709704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.709742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308299800559 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.709759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.291 #40 NEW cov: 12508 ft: 14932 corp: 21/1395b lim: 100 exec/s: 40 rss: 75Mb L: 43/99 MS: 1 CrossOver- 00:08:39.291 [2024-10-07 09:27:34.769994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289300471134023663 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.770022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.770068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.770084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.770139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.770153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:08:39.291 #41 NEW cov: 12508 ft: 14999 corp: 22/1457b lim: 100 exec/s: 41 rss: 75Mb L: 62/99 MS: 1 InsertByte- 00:08:39.291 [2024-10-07 09:27:34.830331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.830359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.830415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.830431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.830484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.830504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.291 [2024-10-07 09:27:34.830557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181212 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.291 [2024-10-07 09:27:34.830573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.291 #42 NEW cov: 12508 ft: 15017 corp: 23/1549b lim: 100 exec/s: 42 rss: 75Mb L: 92/99 MS: 1 InsertByte- 00:08:39.551 [2024-10-07 09:27:34.870425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61249 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.870452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.551 [2024-10-07 09:27:34.870521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.870537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.551 [2024-10-07 09:27:34.870591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.870607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.551 [2024-10-07 09:27:34.870662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.870677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.551 #43 NEW cov: 12508 ft: 15046 corp: 24/1636b lim: 100 exec/s: 43 rss: 75Mb L: 87/99 MS: 1 CrossOver- 00:08:39.551 [2024-10-07 09:27:34.930446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069497417711 len:65536 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.930473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.551 [2024-10-07 09:27:34.930517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14118767170631495663 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.930533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.551 [2024-10-07 09:27:34.930603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.930619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.551 #44 NEW cov: 12508 ft: 15062 corp: 25/1709b lim: 100 exec/s: 44 rss: 75Mb L: 73/99 MS: 1 InsertRepeatedBytes- 00:08:39.551 [2024-10-07 09:27:34.990468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.990495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.551 [2024-10-07 09:27:34.990535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.551 [2024-10-07 09:27:34.990550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.551 #45 NEW cov: 12508 ft: 15070 corp: 26/1767b lim: 100 exec/s: 45 rss: 75Mb L: 58/99 MS: 1 CopyPart- 00:08:39.552 [2024-10-07 09:27:35.050649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.552 [2024-10-07 09:27:35.050676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.552 [2024-10-07 09:27:35.050716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308299800559 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.552 [2024-10-07 09:27:35.050732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.552 #46 NEW cov: 12508 ft: 15109 corp: 27/1810b lim: 100 exec/s: 46 rss: 75Mb L: 43/99 MS: 1 PersAutoDict- DE: "\377\377\377\327"- 00:08:39.552 [2024-10-07 09:27:35.110904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.552 [2024-10-07 09:27:35.110932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.552 [2024-10-07 09:27:35.110974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308299800559 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.552 [2024-10-07 09:27:35.110990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.811 #47 NEW cov: 12508 ft: 15162 corp: 28/1853b lim: 100 exec/s: 47 rss: 75Mb L: 43/99 MS: 1 PersAutoDict- DE: "\377\377\377\327"- 00:08:39.811 [2024-10-07 09:27:35.150924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.811 [2024-10-07 09:27:35.150951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.811 [2024-10-07 09:27:35.150990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308299800559 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.811 [2024-10-07 09:27:35.151007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.811 #48 NEW cov: 12508 ft: 15202 corp: 29/1897b lim: 100 exec/s: 48 rss: 75Mb L: 44/99 MS: 1 InsertByte- 00:08:39.811 [2024-10-07 09:27:35.191352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.811 [2024-10-07 09:27:35.191378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.811 [2024-10-07 09:27:35.191435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.811 [2024-10-07 09:27:35.191452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.811 [2024-10-07 09:27:35.191521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.811 [2024-10-07 09:27:35.191537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.811 [2024-10-07 09:27:35.191594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181212 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.811 [2024-10-07 09:27:35.191610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.811 #49 NEW cov: 12508 ft: 15209 corp: 30/1989b lim: 100 exec/s: 49 rss: 75Mb L: 92/99 MS: 1 ChangeASCIIInt- 00:08:39.812 [2024-10-07 09:27:35.251506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.251536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.251585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181252 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.251601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.251653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 
lba:14323354221929416390 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.251669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.251722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.251738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.812 #50 NEW cov: 12508 ft: 15247 corp: 31/2076b lim: 100 exec/s: 50 rss: 75Mb L: 87/99 MS: 1 ChangeBit- 00:08:39.812 [2024-10-07 09:27:35.311741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.311768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.311827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14323354221939181252 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.311843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.311897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221929416390 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.311912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.311967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.311982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.812 #51 NEW cov: 12508 ft: 15277 corp: 32/2173b lim: 100 exec/s: 51 rss: 76Mb L: 97/99 MS: 1 InsertRepeatedBytes- 00:08:39.812 [2024-10-07 09:27:35.371776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17289301304357679087 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.371804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.371851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17289301308299800559 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.371868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.812 [2024-10-07 09:27:35.371924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17289301308300324847 len:61424 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:39.812 [2024-10-07 09:27:35.371940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.072 #52 NEW cov: 12508 ft: 15290 corp: 33/2242b lim: 
100 exec/s: 52 rss: 76Mb L: 69/99 MS: 1 ChangeByte- 00:08:40.072 [2024-10-07 09:27:35.411970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14323354218787762159 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.072 [2024-10-07 09:27:35.412000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.072 [2024-10-07 09:27:35.412041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9404222470095292100 len:33411 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.072 [2024-10-07 09:27:35.412057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.072 [2024-10-07 09:27:35.412109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14323354221939181254 len:12743 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.072 [2024-10-07 09:27:35.412124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.072 [2024-10-07 09:27:35.412179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14323354221939181254 len:50887 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:40.072 [2024-10-07 09:27:35.412195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:40.072 #53 NEW cov: 12508 ft: 15295 corp: 34/2338b lim: 100 exec/s: 26 rss: 76Mb L: 96/99 MS: 1 InsertRepeatedBytes- 00:08:40.072 #53 DONE cov: 12508 ft: 15295 corp: 34/2338b lim: 100 exec/s: 26 rss: 76Mb 00:08:40.072 ###### Recommended dictionary. ###### 00:08:40.072 "\377\377\377\327" # Uses: 2 00:08:40.072 ###### End of recommended dictionary. 
###### 00:08:40.072 Done 53 runs in 2 second(s) 00:08:40.072 09:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:40.072 09:27:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.072 09:27:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.072 09:27:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:40.072 00:08:40.072 real 1m8.366s 00:08:40.072 user 1m41.612s 00:08:40.072 sys 0m10.218s 00:08:40.072 09:27:35 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.072 09:27:35 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 ************************************ 00:08:40.072 END TEST nvmf_llvm_fuzz 00:08:40.072 ************************************ 00:08:40.072 09:27:35 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:40.072 09:27:35 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:40.072 09:27:35 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:40.072 09:27:35 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.072 09:27:35 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.072 09:27:35 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:40.333 ************************************ 00:08:40.333 START TEST vfio_llvm_fuzz 00:08:40.333 ************************************ 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:40.333 * Looking for test storage... 
00:08:40.333 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.333 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:40.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.334 --rc genhtml_branch_coverage=1 00:08:40.334 --rc genhtml_function_coverage=1 00:08:40.334 --rc genhtml_legend=1 00:08:40.334 --rc geninfo_all_blocks=1 00:08:40.334 --rc geninfo_unexecuted_blocks=1 00:08:40.334 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.334 ' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:40.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.334 --rc genhtml_branch_coverage=1 00:08:40.334 --rc genhtml_function_coverage=1 00:08:40.334 --rc genhtml_legend=1 00:08:40.334 --rc geninfo_all_blocks=1 00:08:40.334 --rc geninfo_unexecuted_blocks=1 00:08:40.334 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.334 ' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:40.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.334 --rc genhtml_branch_coverage=1 00:08:40.334 --rc genhtml_function_coverage=1 00:08:40.334 --rc genhtml_legend=1 00:08:40.334 --rc geninfo_all_blocks=1 00:08:40.334 --rc geninfo_unexecuted_blocks=1 00:08:40.334 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.334 ' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:40.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.334 --rc genhtml_branch_coverage=1 00:08:40.334 --rc genhtml_function_coverage=1 00:08:40.334 --rc genhtml_legend=1 00:08:40.334 --rc geninfo_all_blocks=1 00:08:40.334 --rc geninfo_unexecuted_blocks=1 00:08:40.334 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.334 ' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:08:40.334 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:40.335 #define SPDK_CONFIG_H 00:08:40.335 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:40.335 #define SPDK_CONFIG_APPS 1 00:08:40.335 #define SPDK_CONFIG_ARCH native 00:08:40.335 #undef SPDK_CONFIG_ASAN 00:08:40.335 #undef SPDK_CONFIG_AVAHI 00:08:40.335 #undef SPDK_CONFIG_CET 00:08:40.335 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:40.335 #define SPDK_CONFIG_COVERAGE 1 00:08:40.335 #define SPDK_CONFIG_CROSS_PREFIX 00:08:40.335 #undef SPDK_CONFIG_CRYPTO 00:08:40.335 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:40.335 #undef SPDK_CONFIG_CUSTOMOCF 00:08:40.335 #undef SPDK_CONFIG_DAOS 00:08:40.335 #define SPDK_CONFIG_DAOS_DIR 00:08:40.335 #define SPDK_CONFIG_DEBUG 1 00:08:40.335 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:40.335 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:40.335 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:40.335 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:40.335 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:40.335 #undef SPDK_CONFIG_DPDK_UADK 00:08:40.335 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:40.335 #define SPDK_CONFIG_EXAMPLES 1 00:08:40.335 #undef SPDK_CONFIG_FC 00:08:40.335 #define SPDK_CONFIG_FC_PATH 00:08:40.335 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:40.335 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:40.335 #define SPDK_CONFIG_FSDEV 1 00:08:40.335 #undef SPDK_CONFIG_FUSE 00:08:40.335 #define SPDK_CONFIG_FUZZER 1 00:08:40.335 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:40.335 #undef SPDK_CONFIG_GOLANG 00:08:40.335 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:40.335 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:40.335 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:40.335 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:40.335 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:40.335 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:40.335 #undef SPDK_CONFIG_HAVE_LZ4 00:08:40.335 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:40.335 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:40.335 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:40.335 #define SPDK_CONFIG_IDXD 1 00:08:40.335 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:40.335 #undef SPDK_CONFIG_IPSEC_MB 00:08:40.335 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:40.335 #define SPDK_CONFIG_ISAL 1 00:08:40.335 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:40.335 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:40.335 #define SPDK_CONFIG_LIBDIR 00:08:40.335 #undef SPDK_CONFIG_LTO 00:08:40.335 #define SPDK_CONFIG_MAX_LCORES 128 00:08:40.335 #define SPDK_CONFIG_NVME_CUSE 1 00:08:40.335 #undef SPDK_CONFIG_OCF 00:08:40.335 #define SPDK_CONFIG_OCF_PATH 00:08:40.335 #define SPDK_CONFIG_OPENSSL_PATH 00:08:40.335 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:40.335 #define SPDK_CONFIG_PGO_DIR 00:08:40.335 #undef SPDK_CONFIG_PGO_USE 00:08:40.335 #define SPDK_CONFIG_PREFIX /usr/local 00:08:40.335 #undef SPDK_CONFIG_RAID5F 00:08:40.335 #undef SPDK_CONFIG_RBD 00:08:40.335 #define SPDK_CONFIG_RDMA 1 00:08:40.335 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:40.335 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:40.335 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:40.335 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:40.335 #undef SPDK_CONFIG_SHARED 00:08:40.335 #undef SPDK_CONFIG_SMA 00:08:40.335 #define SPDK_CONFIG_TESTS 1 00:08:40.335 #undef SPDK_CONFIG_TSAN 00:08:40.335 #define SPDK_CONFIG_UBLK 1 00:08:40.335 #define SPDK_CONFIG_UBSAN 1 00:08:40.335 #undef SPDK_CONFIG_UNIT_TESTS 00:08:40.335 #undef SPDK_CONFIG_URING 00:08:40.335 #define SPDK_CONFIG_URING_PATH 00:08:40.335 #undef SPDK_CONFIG_URING_ZNS 00:08:40.335 #undef SPDK_CONFIG_USDT 00:08:40.335 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:40.335 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:40.335 #define SPDK_CONFIG_VFIO_USER 1 00:08:40.335 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:40.335 #define SPDK_CONFIG_VHOST 1 00:08:40.335 #define SPDK_CONFIG_VIRTIO 1 00:08:40.335 #undef SPDK_CONFIG_VTUNE 00:08:40.335 #define SPDK_CONFIG_VTUNE_DIR 00:08:40.335 #define SPDK_CONFIG_WERROR 1 00:08:40.335 #define SPDK_CONFIG_WPDK_DIR 00:08:40.335 #undef SPDK_CONFIG_XNVME 00:08:40.335 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:40.335 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:40.336 09:27:35 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:40.336 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:40.597 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:40.598 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 496249 ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 496249 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.SlOqSa 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.SlOqSa/tests/vfio /tmp/spdk.SlOqSa 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86493052928 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500294656 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8007241728 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 
09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47246716928 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894159872 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5902336 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249645568 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250149376 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=503808 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:08:40.599 * Looking for test storage... 
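The df scan traced above is set_test_storage indexing every mounted filesystem before deciding where /tmp/spdk.SlOqSa should live; the lines that follow compare each candidate's available space against the requested ~2.2 GB. A simplified sketch of the scan loop (the 1K-block-to-byte conversion is an assumption about the helper, not copied from the trace):

    #!/usr/bin/env bash
    # Mirrors the 'read -r source fs size use avail _ mount' loop in the
    # trace: key each mount point into associative arrays so the candidate
    # check can test avails["$mount"] against the requested size.
    declare -A mounts fss sizes avails uses

    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)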
00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86493052928 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10221834240 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:40.599 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:40.599 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:08:40.600 09:27:35 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:40.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.600 --rc genhtml_branch_coverage=1 00:08:40.600 --rc genhtml_function_coverage=1 00:08:40.600 --rc genhtml_legend=1 00:08:40.600 --rc geninfo_all_blocks=1 00:08:40.600 --rc geninfo_unexecuted_blocks=1 00:08:40.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.600 ' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:40.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.600 --rc genhtml_branch_coverage=1 00:08:40.600 --rc genhtml_function_coverage=1 00:08:40.600 --rc genhtml_legend=1 00:08:40.600 --rc geninfo_all_blocks=1 00:08:40.600 --rc geninfo_unexecuted_blocks=1 00:08:40.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.600 ' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:40.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.600 --rc genhtml_branch_coverage=1 00:08:40.600 --rc genhtml_function_coverage=1 00:08:40.600 --rc genhtml_legend=1 00:08:40.600 --rc geninfo_all_blocks=1 00:08:40.600 --rc geninfo_unexecuted_blocks=1 00:08:40.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.600 ' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:40.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.600 --rc genhtml_branch_coverage=1 00:08:40.600 --rc genhtml_function_coverage=1 00:08:40.600 --rc genhtml_legend=1 00:08:40.600 --rc geninfo_all_blocks=1 00:08:40.600 --rc geninfo_unexecuted_blocks=1 00:08:40.600 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:40.600 ' 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:40.600 09:27:36 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:40.600 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:40.600 09:27:36 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:40.600 [2024-10-07 09:27:36.114700] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:40.600 [2024-10-07 09:27:36.114771] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496417 ] 00:08:40.859 [2024-10-07 09:27:36.193102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.859 [2024-10-07 09:27:36.277634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.118 INFO: Running with entropic power schedule (0xFF, 100). 00:08:41.118 INFO: Seed: 4070197430 00:08:41.118 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:41.118 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:41.118 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:41.118 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.118 #2 INITED exec/s: 0 rss: 67Mb 00:08:41.118 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:41.118 This may also happen if the target rejected all inputs we tried so far 00:08:41.118 [2024-10-07 09:27:36.535473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:41.687 NEW_FUNC[1/670]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:41.687 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:41.687 #17 NEW cov: 11116 ft: 11090 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 5 ChangeBit-ChangeBinInt-InsertByte-CrossOver-InsertRepeatedBytes- 00:08:41.687 NEW_FUNC[1/1]: 0x441d58 in io_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:393 00:08:41.687 #27 NEW cov: 11145 ft: 14271 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 5 InsertRepeatedBytes-ShuffleBytes-ShuffleBytes-ShuffleBytes-InsertByte- 00:08:41.687 #33 NEW cov: 11148 ft: 15467 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:08:41.946 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:41.947 #37 NEW cov: 11165 ft: 16125 corp: 5/25b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 4 EraseBytes-ChangeBinInt-ChangeBinInt-CopyPart- 00:08:41.947 #48 NEW cov: 11165 ft: 16348 corp: 6/31b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:08:42.207 #49 NEW cov: 11165 ft: 16532 corp: 7/37b lim: 6 exec/s: 49 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:42.207 #55 NEW cov: 11165 ft: 16595 corp: 8/43b lim: 6 exec/s: 55 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:08:42.467 #56 NEW cov: 11165 ft: 17672 corp: 9/49b lim: 6 exec/s: 56 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:08:42.726 #67 NEW cov: 11165 ft: 18166 corp: 10/55b lim: 6 exec/s: 67 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:08:42.726 #68 NEW cov: 11165 ft: 18710 corp: 11/61b lim: 6 exec/s: 68 rss: 76Mb L: 6/6 MS: 1 
CrossOver- 00:08:42.989 #74 NEW cov: 11172 ft: 19251 corp: 12/67b lim: 6 exec/s: 74 rss: 76Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:08:43.253 #75 NEW cov: 11172 ft: 19467 corp: 13/73b lim: 6 exec/s: 37 rss: 76Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:08:43.253 #75 DONE cov: 11172 ft: 19467 corp: 13/73b lim: 6 exec/s: 37 rss: 76Mb 00:08:43.253 Done 75 runs in 2 second(s) 00:08:43.253 [2024-10-07 09:27:38.672030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:43.513 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:43.513 09:27:38 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:43.513 [2024-10-07 09:27:38.977699] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
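Between runs the harness removes /tmp/vfio-user-N and the LSAN suppression file, increments the loop counter, and launches the next fuzzer type; the "(( i++ ))", mkdir, and "echo leak:..." commands above are that per-iteration setup for fuzzer 1. A condensed sketch of the driver loop, with the actual llvm_vfio_fuzz command line elided:

    #!/usr/bin/env bash
    # Loop shape as reflected in the ../common.sh and vfio/run.sh trace
    # lines: seven fuzzer types, one second each, pinned to core mask 0x1.
    fuzz_num=7   # the log derives this via: grep -c '\.fn =' llvm_vfio_fuzz.c
    time=1

    for ((i = 0; i < fuzz_num; i++)); do
        fuzzer_dir=/tmp/vfio-user-$i
        suppress_file=/var/tmp/suppress_vfio_fuzz
        mkdir -p "$fuzzer_dir/domain/1" "$fuzzer_dir/domain/2"
        # xtrace hides redirections, so writing these lines into the
        # suppression file is inferred from LSAN_OPTIONS=suppressions=...
        echo "leak:spdk_nvmf_qpair_disconnect" > "$suppress_file"
        echo "leak:nvmf_ctrlr_create" >> "$suppress_file"
        # ... run llvm_vfio_fuzz -Z "$i" -t "$time" -m 0x1 here ...
        rm -rf "$fuzzer_dir" "$suppress_file"
    done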
00:08:43.513 [2024-10-07 09:27:38.977754] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496825 ] 00:08:43.513 [2024-10-07 09:27:39.052030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.774 [2024-10-07 09:27:39.136154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.034 INFO: Running with entropic power schedule (0xFF, 100). 00:08:44.034 INFO: Seed: 2637231962 00:08:44.034 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:44.034 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:44.034 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:44.034 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.034 #2 INITED exec/s: 0 rss: 67Mb 00:08:44.034 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:44.034 This may also happen if the target rejected all inputs we tried so far 00:08:44.034 [2024-10-07 09:27:39.398325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:44.034 [2024-10-07 09:27:39.443841] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.034 [2024-10-07 09:27:39.443869] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:44.034 [2024-10-07 09:27:39.443905] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.605 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:44.605 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:44.605 #56 NEW cov: 11117 ft: 11078 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 CrossOver-ChangeBit-CrossOver-InsertByte- 00:08:44.605 [2024-10-07 09:27:39.933154] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.605 [2024-10-07 09:27:39.933192] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:44.605 [2024-10-07 09:27:39.933212] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.605 #57 NEW cov: 11131 ft: 14321 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:08:44.605 [2024-10-07 09:27:40.117027] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.605 [2024-10-07 09:27:40.117064] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:44.605 [2024-10-07 09:27:40.117101] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.866 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:44.866 #63 NEW cov: 11151 ft: 15297 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:08:44.866 [2024-10-07 09:27:40.300634] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:44.866 [2024-10-07 09:27:40.300660] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 
00:08:44.866 [2024-10-07 09:27:40.300695] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:44.866 #64 NEW cov: 11151 ft: 15602 corp: 5/17b lim: 4 exec/s: 64 rss: 76Mb L: 4/4 MS: 1 CrossOver- 00:08:45.126 [2024-10-07 09:27:40.472068] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:45.126 [2024-10-07 09:27:40.472091] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:45.126 [2024-10-07 09:27:40.472125] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:45.126 #65 NEW cov: 11151 ft: 16836 corp: 6/21b lim: 4 exec/s: 65 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:08:45.126 [2024-10-07 09:27:40.653384] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:45.126 [2024-10-07 09:27:40.653409] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:45.126 [2024-10-07 09:27:40.653442] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:45.386 #66 NEW cov: 11151 ft: 17163 corp: 7/25b lim: 4 exec/s: 66 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:08:45.386 [2024-10-07 09:27:40.826333] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:45.386 [2024-10-07 09:27:40.826357] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:45.386 [2024-10-07 09:27:40.826391] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:45.386 #67 NEW cov: 11151 ft: 17246 corp: 8/29b lim: 4 exec/s: 67 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:45.646 [2024-10-07 09:27:41.008963] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:45.646 [2024-10-07 09:27:41.008987] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:45.646 [2024-10-07 09:27:41.009006] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:45.646 #68 NEW cov: 11151 ft: 17407 corp: 9/33b lim: 4 exec/s: 68 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:45.646 [2024-10-07 09:27:41.179240] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:45.646 [2024-10-07 09:27:41.179262] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:45.646 [2024-10-07 09:27:41.179284] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:45.905 #69 NEW cov: 11158 ft: 17525 corp: 10/37b lim: 4 exec/s: 69 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:08:45.905 [2024-10-07 09:27:41.347549] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:45.905 [2024-10-07 09:27:41.347574] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:45.905 [2024-10-07 09:27:41.347594] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:45.905 #75 NEW cov: 11158 ft: 17891 corp: 11/41b lim: 4 exec/s: 37 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:08:45.905 #75 DONE cov: 11158 ft: 17891 corp: 11/41b lim: 4 exec/s: 37 rss: 77Mb 00:08:45.905 Done 75 runs in 2 second(s) 00:08:46.165 [2024-10-07 09:27:41.476050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:46.426 09:27:41 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:46.426 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:46.426 09:27:41 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:46.426 [2024-10-07 09:27:41.794598] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:46.426 [2024-10-07 09:27:41.794663] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497185 ] 00:08:46.426 [2024-10-07 09:27:41.869501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.426 [2024-10-07 09:27:41.953209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.686 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:46.686 INFO: Seed: 1149274702 00:08:46.686 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:46.686 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:46.686 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:46.686 INFO: A corpus is not provided, starting from an empty corpus 00:08:46.686 #2 INITED exec/s: 0 rss: 68Mb 00:08:46.686 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:46.686 This may also happen if the target rejected all inputs we tried so far 00:08:46.686 [2024-10-07 09:27:42.201821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:46.946 [2024-10-07 09:27:42.254674] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.205 NEW_FUNC[1/670]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:47.206 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:47.206 #25 NEW cov: 11102 ft: 10697 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 3 ChangeBit-InsertRepeatedBytes-InsertByte- 00:08:47.206 [2024-10-07 09:27:42.736478] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.466 NEW_FUNC[1/2]: 0x1edd7b8 in accel_comp_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/accel/accel_sw.c:683 00:08:47.466 NEW_FUNC[2/2]: 0x1f0a7f8 in spdk_get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1282 00:08:47.466 #26 NEW cov: 11121 ft: 14023 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:08:47.466 [2024-10-07 09:27:42.928326] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.725 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:47.725 #52 NEW cov: 11138 ft: 15898 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:08:47.725 [2024-10-07 09:27:43.119640] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.725 #60 NEW cov: 11138 ft: 16474 corp: 5/33b lim: 8 exec/s: 60 rss: 76Mb L: 8/8 MS: 3 EraseBytes-CrossOver-CrossOver- 00:08:47.984 [2024-10-07 09:27:43.309120] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:47.985 #61 NEW cov: 11138 ft: 16876 corp: 6/41b lim: 8 exec/s: 61 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:47.985 [2024-10-07 09:27:43.487823] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:48.244 #77 NEW cov: 11138 ft: 17857 corp: 7/49b lim: 8 exec/s: 77 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:08:48.244 [2024-10-07 09:27:43.668219] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:48.244 #78 NEW cov: 11138 ft: 17985 corp: 8/57b lim: 8 exec/s: 78 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:08:48.503 [2024-10-07 09:27:43.846511] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:48.503 #89 NEW cov: 11138 ft: 18171 corp: 9/65b lim: 8 exec/s: 89 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:08:48.503 [2024-10-07 09:27:44.025630] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: 
Oversized argument length, command 5 00:08:48.763 #90 NEW cov: 11145 ft: 18463 corp: 10/73b lim: 8 exec/s: 90 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:08:48.763 [2024-10-07 09:27:44.216651] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:48.763 #91 NEW cov: 11145 ft: 18528 corp: 11/81b lim: 8 exec/s: 45 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:08:48.763 #91 DONE cov: 11145 ft: 18528 corp: 11/81b lim: 8 exec/s: 45 rss: 77Mb 00:08:48.763 Done 91 runs in 2 second(s) 00:08:49.023 [2024-10-07 09:27:44.345030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:49.282 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:49.282 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:49.283 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:49.283 09:27:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:49.283 [2024-10-07 09:27:44.661875] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:49.283 [2024-10-07 09:27:44.661945] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497546 ] 00:08:49.283 [2024-10-07 09:27:44.738835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.283 [2024-10-07 09:27:44.819063] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.543 INFO: Running with entropic power schedule (0xFF, 100). 00:08:49.543 INFO: Seed: 4014257547 00:08:49.543 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:49.543 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:49.543 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:49.543 INFO: A corpus is not provided, starting from an empty corpus 00:08:49.543 #2 INITED exec/s: 0 rss: 67Mb 00:08:49.543 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:49.543 This may also happen if the target rejected all inputs we tried so far 00:08:49.543 [2024-10-07 09:27:45.062465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:50.062 NEW_FUNC[1/671]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:50.062 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:50.062 #273 NEW cov: 11108 ft: 10867 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:50.323 NEW_FUNC[1/1]: 0x1892eb8 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1539 00:08:50.323 #274 NEW cov: 11126 ft: 14203 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:08:50.583 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:50.583 #275 NEW cov: 11146 ft: 14486 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:50.583 #276 NEW cov: 11146 ft: 15936 corp: 5/129b lim: 32 exec/s: 276 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:08:50.842 #282 NEW cov: 11146 ft: 16037 corp: 6/161b lim: 32 exec/s: 282 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:08:51.102 #283 NEW cov: 11146 ft: 16152 corp: 7/193b lim: 32 exec/s: 283 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:51.102 #284 NEW cov: 11146 ft: 17112 corp: 8/225b lim: 32 exec/s: 284 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:08:51.363 #290 NEW cov: 11146 ft: 17653 corp: 9/257b lim: 32 exec/s: 290 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:51.623 #291 NEW cov: 11153 ft: 18021 corp: 10/289b lim: 32 exec/s: 291 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:51.623 #292 NEW cov: 11153 ft: 18228 corp: 11/321b lim: 32 exec/s: 146 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:08:51.623 #292 DONE cov: 11153 ft: 18228 corp: 11/321b lim: 32 exec/s: 146 rss: 77Mb 00:08:51.623 Done 292 runs in 2 second(s) 00:08:51.623 [2024-10-07 09:27:47.176030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:51.884 
09:27:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:51.884 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:51.884 09:27:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:52.144 [2024-10-07 09:27:47.460855] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:52.144 [2024-10-07 09:27:47.460910] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497905 ] 00:08:52.144 [2024-10-07 09:27:47.535954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.144 [2024-10-07 09:27:47.619451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.404 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:52.405 INFO: Seed: 2526314112 00:08:52.405 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:52.405 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:52.405 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:52.405 INFO: A corpus is not provided, starting from an empty corpus 00:08:52.405 #2 INITED exec/s: 0 rss: 67Mb 00:08:52.405 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:52.405 This may also happen if the target rejected all inputs we tried so far 00:08:52.405 [2024-10-07 09:27:47.876599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:52.924 NEW_FUNC[1/672]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:52.924 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:52.924 #87 NEW cov: 11113 ft: 11066 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 CrossOver-InsertRepeatedBytes-ChangeBinInt-CMP-InsertRepeatedBytes- DE: "\377\003\000\000\000\000\000\000"- 00:08:53.184 #93 NEW cov: 11127 ft: 14874 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:08:53.184 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:53.184 #94 NEW cov: 11144 ft: 16390 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:08:53.444 #95 NEW cov: 11144 ft: 16913 corp: 5/129b lim: 32 exec/s: 95 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:53.702 #96 NEW cov: 11144 ft: 17164 corp: 6/161b lim: 32 exec/s: 96 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:08:53.961 #102 NEW cov: 11144 ft: 17283 corp: 7/193b lim: 32 exec/s: 102 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:08:53.961 #103 NEW cov: 11144 ft: 17358 corp: 8/225b lim: 32 exec/s: 103 rss: 77Mb L: 32/32 MS: 1 CMP- DE: "~\000\000\000\000\000\000\000"- 00:08:54.221 #104 NEW cov: 11151 ft: 17545 corp: 9/257b lim: 32 exec/s: 104 rss: 77Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\003\000\000\000\000\000\000"- 00:08:54.545 #105 NEW cov: 11151 ft: 17869 corp: 10/289b lim: 32 exec/s: 52 rss: 77Mb L: 32/32 MS: 1 CMP- DE: "\000\000\000\020"- 00:08:54.545 #105 DONE cov: 11151 ft: 17869 corp: 10/289b lim: 32 exec/s: 52 rss: 77Mb 00:08:54.545 ###### Recommended dictionary. ###### 00:08:54.545 "\377\003\000\000\000\000\000\000" # Uses: 2 00:08:54.545 "~\000\000\000\000\000\000\000" # Uses: 0 00:08:54.545 "\000\000\000\020" # Uses: 0 00:08:54.545 ###### End of recommended dictionary. 
###### 00:08:54.545 Done 105 runs in 2 second(s) 00:08:54.545 [2024-10-07 09:27:49.939012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:54.861 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:54.862 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:54.862 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:54.862 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:54.862 09:27:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:54.862 [2024-10-07 09:27:50.236944] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:54.862 [2024-10-07 09:27:50.237010] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498270 ] 00:08:54.862 [2024-10-07 09:27:50.321444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.862 [2024-10-07 09:27:50.407939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.240 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.240 INFO: Seed: 1022324607 00:08:55.240 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:55.240 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:55.240 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:55.240 INFO: A corpus is not provided, starting from an empty corpus 00:08:55.240 #2 INITED exec/s: 0 rss: 66Mb 00:08:55.240 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:55.240 This may also happen if the target rejected all inputs we tried so far 00:08:55.240 [2024-10-07 09:27:50.667585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:55.240 [2024-10-07 09:27:50.743683] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.240 [2024-10-07 09:27:50.743726] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:55.758 NEW_FUNC[1/671]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:55.758 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:55.758 #11 NEW cov: 11119 ft: 11093 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 4 InsertRepeatedBytes-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:08:55.758 [2024-10-07 09:27:51.249630] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:55.758 [2024-10-07 09:27:51.249676] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.017 NEW_FUNC[1/2]: 0x452098 in spdk_bdev_io_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/bdev_module.h:1444 00:08:56.018 NEW_FUNC[2/2]: 0x193f728 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1159 00:08:56.018 #12 NEW cov: 11140 ft: 14527 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:08:56.018 [2024-10-07 09:27:51.462122] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.018 [2024-10-07 09:27:51.462155] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.018 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:56.018 #13 NEW cov: 11157 ft: 14989 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:08:56.277 [2024-10-07 09:27:51.663387] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.277 [2024-10-07 09:27:51.663419] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.277 #14 NEW cov: 11157 ft: 15885 corp: 5/53b lim: 
13 exec/s: 14 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:08:56.537 [2024-10-07 09:27:51.863719] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.537 [2024-10-07 09:27:51.863751] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.537 #15 NEW cov: 11157 ft: 16544 corp: 6/66b lim: 13 exec/s: 15 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:08:56.537 [2024-10-07 09:27:52.062829] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.537 [2024-10-07 09:27:52.062862] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:56.795 #16 NEW cov: 11157 ft: 16734 corp: 7/79b lim: 13 exec/s: 16 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:56.795 [2024-10-07 09:27:52.255681] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:56.795 [2024-10-07 09:27:52.255715] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.054 #17 NEW cov: 11157 ft: 16990 corp: 8/92b lim: 13 exec/s: 17 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:57.054 [2024-10-07 09:27:52.465242] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.054 [2024-10-07 09:27:52.465275] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.054 #23 NEW cov: 11164 ft: 17030 corp: 9/105b lim: 13 exec/s: 23 rss: 75Mb L: 13/13 MS: 1 CrossOver- 00:08:57.313 [2024-10-07 09:27:52.653888] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:57.313 [2024-10-07 09:27:52.653919] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:57.313 #24 NEW cov: 11164 ft: 17565 corp: 10/118b lim: 13 exec/s: 12 rss: 75Mb L: 13/13 MS: 1 ChangeBit- 00:08:57.313 #24 DONE cov: 11164 ft: 17565 corp: 10/118b lim: 13 exec/s: 12 rss: 75Mb 00:08:57.313 Done 24 runs in 2 second(s) 00:08:57.313 [2024-10-07 09:27:52.788026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:57.572 09:27:53 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:57.572 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:57.572 09:27:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:57.572 [2024-10-07 09:27:53.110332] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:57.572 [2024-10-07 09:27:53.110400] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498639 ] 00:08:57.832 [2024-10-07 09:27:53.187285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.832 [2024-10-07 09:27:53.271540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.091 INFO: Running with entropic power schedule (0xFF, 100). 00:08:58.091 INFO: Seed: 3886323918 00:08:58.091 INFO: Loaded 1 modules (381333 inline 8-bit counters): 381333 [0x2ba70cc, 0x2c04261), 00:08:58.091 INFO: Loaded 1 PC tables (381333 PCs): 381333 [0x2c04268,0x31d5bb8), 00:08:58.091 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:58.091 INFO: A corpus is not provided, starting from an empty corpus 00:08:58.091 #2 INITED exec/s: 0 rss: 68Mb 00:08:58.091 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:58.091 This may also happen if the target rejected all inputs we tried so far 00:08:58.092 [2024-10-07 09:27:53.525825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:58.092 [2024-10-07 09:27:53.573872] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:58.092 [2024-10-07 09:27:53.573906] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:58.610 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:58.610 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:58.610 #3 NEW cov: 11107 ft: 11060 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\001%\365,V\361x\314"- 00:08:58.610 [2024-10-07 09:27:54.050483] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:58.610 [2024-10-07 09:27:54.050528] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:58.610 #4 NEW cov: 11126 ft: 14345 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 CrossOver- 00:08:58.870 [2024-10-07 09:27:54.245822] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:58.870 [2024-10-07 09:27:54.245856] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:58.870 NEW_FUNC[1/1]: 0x1bc41d8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:658 00:08:58.870 #9 NEW cov: 11143 ft: 15319 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 5 CopyPart-ChangeByte-ChangeByte-InsertRepeatedBytes-InsertByte- 00:08:59.131 [2024-10-07 09:27:54.441844] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.131 [2024-10-07 09:27:54.441877] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.131 #10 NEW cov: 11143 ft: 16092 corp: 5/37b lim: 9 exec/s: 10 rss: 76Mb L: 9/9 MS: 1 CopyPart- 00:08:59.131 [2024-10-07 09:27:54.626611] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.131 [2024-10-07 09:27:54.626641] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.391 #16 NEW cov: 11143 ft: 16738 corp: 6/46b lim: 9 exec/s: 16 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:59.391 [2024-10-07 09:27:54.820603] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.391 [2024-10-07 09:27:54.820637] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.391 #17 NEW cov: 11143 ft: 16844 corp: 7/55b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:59.650 [2024-10-07 09:27:55.005704] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.650 [2024-10-07 09:27:55.005737] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.650 #23 NEW cov: 11143 ft: 17253 corp: 8/64b lim: 9 exec/s: 23 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:08:59.650 [2024-10-07 09:27:55.211582] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.650 [2024-10-07 09:27:55.211614] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:59.909 #28 NEW cov: 
11150 ft: 17618 corp: 9/73b lim: 9 exec/s: 28 rss: 76Mb L: 9/9 MS: 5 InsertRepeatedBytes-ChangeBinInt-InsertByte-InsertByte-CopyPart- 00:08:59.909 [2024-10-07 09:27:55.412555] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:59.909 [2024-10-07 09:27:55.412587] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:00.168 #34 NEW cov: 11150 ft: 17638 corp: 10/82b lim: 9 exec/s: 17 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:00.168 #34 DONE cov: 11150 ft: 17638 corp: 10/82b lim: 9 exec/s: 17 rss: 76Mb 00:09:00.168 ###### Recommended dictionary. ###### 00:09:00.168 "\001%\365,V\361x\314" # Uses: 0 00:09:00.168 ###### End of recommended dictionary. ###### 00:09:00.168 Done 34 runs in 2 second(s) 00:09:00.168 [2024-10-07 09:27:55.549027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:09:00.428 09:27:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:09:00.428 09:27:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:00.428 09:27:55 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:00.428 09:27:55 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:09:00.428 00:09:00.428 real 0m20.185s 00:09:00.428 user 0m27.925s 00:09:00.428 sys 0m2.036s 00:09:00.428 09:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.428 09:27:55 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 ************************************ 00:09:00.428 END TEST vfio_llvm_fuzz 00:09:00.428 ************************************ 00:09:00.428 00:09:00.428 real 1m28.881s 00:09:00.428 user 2m9.688s 00:09:00.428 sys 0m12.458s 00:09:00.428 09:27:55 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.428 09:27:55 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 ************************************ 00:09:00.428 END TEST llvm_fuzz 00:09:00.428 ************************************ 00:09:00.428 09:27:55 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:09:00.428 09:27:55 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:09:00.428 09:27:55 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:09:00.428 09:27:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.428 09:27:55 -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 09:27:55 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:09:00.428 09:27:55 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:09:00.428 09:27:55 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:09:00.428 09:27:55 -- common/autotest_common.sh@10 -- # set +x 00:09:04.626 INFO: APP EXITING 00:09:04.627 INFO: killing all VMs 00:09:04.627 INFO: killing vhost app 00:09:04.627 INFO: EXIT DONE 00:09:07.921 Waiting for block devices as requested 00:09:07.921 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:09:07.921 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:07.921 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:08.181 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:08.181 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:08.181 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:08.181 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:08.440 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:08.440 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:08.440 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:08.699 
0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:08.699 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:08.699 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:08.958 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:08.958 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:08.958 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:09.218 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:14.495 Cleaning 00:09:14.495 Removing: /dev/shm/spdk_tgt_trace.pid474930 00:09:14.495 Removing: /var/run/dpdk/spdk_pid472446 00:09:14.495 Removing: /var/run/dpdk/spdk_pid473627 00:09:14.495 Removing: /var/run/dpdk/spdk_pid474930 00:09:14.495 Removing: /var/run/dpdk/spdk_pid475469 00:09:14.495 Removing: /var/run/dpdk/spdk_pid476196 00:09:14.495 Removing: /var/run/dpdk/spdk_pid476386 00:09:14.495 Removing: /var/run/dpdk/spdk_pid477141 00:09:14.495 Removing: /var/run/dpdk/spdk_pid477319 00:09:14.495 Removing: /var/run/dpdk/spdk_pid477664 00:09:14.495 Removing: /var/run/dpdk/spdk_pid477905 00:09:14.495 Removing: /var/run/dpdk/spdk_pid478295 00:09:14.495 Removing: /var/run/dpdk/spdk_pid478560 00:09:14.495 Removing: /var/run/dpdk/spdk_pid478814 00:09:14.495 Removing: /var/run/dpdk/spdk_pid479009 00:09:14.495 Removing: /var/run/dpdk/spdk_pid479209 00:09:14.495 Removing: /var/run/dpdk/spdk_pid479438 00:09:14.495 Removing: /var/run/dpdk/spdk_pid480184 00:09:14.495 Removing: /var/run/dpdk/spdk_pid482695 00:09:14.495 Removing: /var/run/dpdk/spdk_pid482904 00:09:14.495 Removing: /var/run/dpdk/spdk_pid483115 00:09:14.495 Removing: /var/run/dpdk/spdk_pid483223 00:09:14.495 Removing: /var/run/dpdk/spdk_pid483677 00:09:14.495 Removing: /var/run/dpdk/spdk_pid483695 00:09:14.495 Removing: /var/run/dpdk/spdk_pid484079 00:09:14.495 Removing: /var/run/dpdk/spdk_pid484254 00:09:14.495 Removing: /var/run/dpdk/spdk_pid484465 00:09:14.495 Removing: /var/run/dpdk/spdk_pid484642 00:09:14.495 Removing: /var/run/dpdk/spdk_pid484846 00:09:14.495 Removing: /var/run/dpdk/spdk_pid484908 00:09:14.495 Removing: /var/run/dpdk/spdk_pid485317 00:09:14.495 Removing: /var/run/dpdk/spdk_pid485521 00:09:14.495 Removing: /var/run/dpdk/spdk_pid485744 00:09:14.495 Removing: /var/run/dpdk/spdk_pid485950 00:09:14.495 Removing: /var/run/dpdk/spdk_pid486542 00:09:14.495 Removing: /var/run/dpdk/spdk_pid486901 00:09:14.495 Removing: /var/run/dpdk/spdk_pid487270 00:09:14.495 Removing: /var/run/dpdk/spdk_pid487699 00:09:14.495 Removing: /var/run/dpdk/spdk_pid488126 00:09:14.495 Removing: /var/run/dpdk/spdk_pid488492 00:09:14.495 Removing: /var/run/dpdk/spdk_pid488846 00:09:14.495 Removing: /var/run/dpdk/spdk_pid489208 00:09:14.495 Removing: /var/run/dpdk/spdk_pid489569 00:09:14.495 Removing: /var/run/dpdk/spdk_pid489937 00:09:14.495 Removing: /var/run/dpdk/spdk_pid490300 00:09:14.495 Removing: /var/run/dpdk/spdk_pid490662 00:09:14.495 Removing: /var/run/dpdk/spdk_pid491039 00:09:14.495 Removing: /var/run/dpdk/spdk_pid491509 00:09:14.495 Removing: /var/run/dpdk/spdk_pid492192 00:09:14.495 Removing: /var/run/dpdk/spdk_pid492618 00:09:14.495 Removing: /var/run/dpdk/spdk_pid492980 00:09:14.495 Removing: /var/run/dpdk/spdk_pid493334 00:09:14.495 Removing: /var/run/dpdk/spdk_pid493700 00:09:14.495 Removing: /var/run/dpdk/spdk_pid494061 00:09:14.495 Removing: /var/run/dpdk/spdk_pid494417 00:09:14.495 Removing: /var/run/dpdk/spdk_pid494776 00:09:14.495 Removing: /var/run/dpdk/spdk_pid495138 00:09:14.495 Removing: /var/run/dpdk/spdk_pid495491 00:09:14.495 Removing: /var/run/dpdk/spdk_pid495854 00:09:14.495 Removing: /var/run/dpdk/spdk_pid496417 
00:09:14.495 Removing: /var/run/dpdk/spdk_pid496825 00:09:14.495 Removing: /var/run/dpdk/spdk_pid497185 00:09:14.495 Removing: /var/run/dpdk/spdk_pid497546 00:09:14.495 Removing: /var/run/dpdk/spdk_pid497905 00:09:14.495 Removing: /var/run/dpdk/spdk_pid498270 00:09:14.495 Removing: /var/run/dpdk/spdk_pid498639 00:09:14.495 Clean 00:09:14.495 09:28:10 -- common/autotest_common.sh@1451 -- # return 0 00:09:14.495 09:28:10 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:09:14.495 09:28:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.495 09:28:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.755 09:28:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:09:14.755 09:28:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.755 09:28:10 -- common/autotest_common.sh@10 -- # set +x 00:09:14.755 09:28:10 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:14.755 09:28:10 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:14.755 09:28:10 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:14.755 09:28:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:09:14.755 09:28:10 -- spdk/autotest.sh@394 -- # hostname 00:09:14.755 09:28:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-39 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:09:15.014 geninfo: WARNING: invalid characters removed from testname! 
00:09:20.290 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:09:25.567 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:09:27.477 09:28:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:35.601 09:28:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:40.876 09:28:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:46.184 09:28:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:51.458 09:28:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:09:56.820 09:28:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:02.099 09:28:56 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:10:02.099 09:28:57 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:10:02.099 09:28:57 -- common/autotest_common.sh@1681 -- $ lcov --version 00:10:02.099 09:28:57 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:10:02.099 09:28:57 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:10:02.099 09:28:57 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:10:02.099 09:28:57 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:10:02.099 09:28:57 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:10:02.099 09:28:57 -- scripts/common.sh@336 -- $ IFS=.-: 00:10:02.099 09:28:57 -- scripts/common.sh@336 -- $ read -ra ver1 00:10:02.099 09:28:57 -- scripts/common.sh@337 -- $ IFS=.-: 00:10:02.099 09:28:57 -- scripts/common.sh@337 -- $ read -ra ver2 00:10:02.099 09:28:57 -- scripts/common.sh@338 -- $ local 'op=<' 00:10:02.099 09:28:57 -- scripts/common.sh@340 -- $ ver1_l=2 00:10:02.099 09:28:57 -- scripts/common.sh@341 -- $ ver2_l=1 00:10:02.099 09:28:57 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:10:02.099 09:28:57 -- scripts/common.sh@344 -- $ case "$op" in 00:10:02.099 09:28:57 -- scripts/common.sh@345 -- $ : 1 00:10:02.099 09:28:57 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:10:02.099 09:28:57 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.099 09:28:57 -- scripts/common.sh@365 -- $ decimal 1 00:10:02.099 09:28:57 -- scripts/common.sh@353 -- $ local d=1 00:10:02.099 09:28:57 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:10:02.099 09:28:57 -- scripts/common.sh@355 -- $ echo 1 00:10:02.099 09:28:57 -- scripts/common.sh@365 -- $ ver1[v]=1 00:10:02.099 09:28:57 -- scripts/common.sh@366 -- $ decimal 2 00:10:02.099 09:28:57 -- scripts/common.sh@353 -- $ local d=2 00:10:02.099 09:28:57 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:10:02.099 09:28:57 -- scripts/common.sh@355 -- $ echo 2 00:10:02.099 09:28:57 -- scripts/common.sh@366 -- $ ver2[v]=2 00:10:02.099 09:28:57 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:10:02.099 09:28:57 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:10:02.099 09:28:57 -- scripts/common.sh@368 -- $ return 0 00:10:02.099 09:28:57 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.099 09:28:57 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:10:02.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.099 --rc genhtml_branch_coverage=1 00:10:02.099 --rc genhtml_function_coverage=1 00:10:02.099 --rc genhtml_legend=1 00:10:02.099 --rc geninfo_all_blocks=1 00:10:02.099 --rc geninfo_unexecuted_blocks=1 00:10:02.099 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:02.099 ' 00:10:02.099 09:28:57 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:10:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.100 --rc genhtml_branch_coverage=1 00:10:02.100 --rc genhtml_function_coverage=1 00:10:02.100 --rc genhtml_legend=1 00:10:02.100 --rc geninfo_all_blocks=1 00:10:02.100 --rc geninfo_unexecuted_blocks=1 00:10:02.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:02.100 ' 00:10:02.100 09:28:57 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:10:02.100 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:10:02.100 --rc genhtml_branch_coverage=1 00:10:02.100 --rc genhtml_function_coverage=1 00:10:02.100 --rc genhtml_legend=1 00:10:02.100 --rc geninfo_all_blocks=1 00:10:02.100 --rc geninfo_unexecuted_blocks=1 00:10:02.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:02.100 ' 00:10:02.100 09:28:57 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:10:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.100 --rc genhtml_branch_coverage=1 00:10:02.100 --rc genhtml_function_coverage=1 00:10:02.100 --rc genhtml_legend=1 00:10:02.100 --rc geninfo_all_blocks=1 00:10:02.100 --rc geninfo_unexecuted_blocks=1 00:10:02.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:02.100 ' 00:10:02.100 09:28:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:02.100 09:28:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:10:02.100 09:28:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:02.100 09:28:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.100 09:28:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.100 09:28:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.100 09:28:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.100 09:28:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.100 09:28:57 -- paths/export.sh@5 -- $ export PATH 00:10:02.100 09:28:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.100 09:28:57 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:10:02.100 09:28:57 -- common/autobuild_common.sh@486 -- $ date +%s 00:10:02.100 09:28:57 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728286137.XXXXXX 00:10:02.100 09:28:57 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728286137.vqUYkA 00:10:02.100 09:28:57 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:10:02.100 09:28:57 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:10:02.100 09:28:57 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:10:02.100 09:28:57 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:10:02.100 09:28:57 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:10:02.100 09:28:57 -- common/autobuild_common.sh@502 -- $ get_config_params 00:10:02.100 09:28:57 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:10:02.100 09:28:57 -- common/autotest_common.sh@10 -- $ set +x 00:10:02.100 09:28:57 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:10:02.100 09:28:57 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:10:02.100 09:28:57 -- pm/common@17 -- $ local monitor 00:10:02.100 09:28:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.100 09:28:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.100 09:28:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.100 09:28:57 -- pm/common@21 -- $ date +%s 00:10:02.100 09:28:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.100 09:28:57 -- pm/common@21 -- $ date +%s 00:10:02.100 09:28:57 -- pm/common@21 -- $ date +%s 00:10:02.100 09:28:57 -- pm/common@25 -- $ sleep 1 00:10:02.100 09:28:57 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728286137 00:10:02.100 09:28:57 -- pm/common@21 -- $ date +%s 00:10:02.100 09:28:57 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728286137 00:10:02.100 09:28:57 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728286137 00:10:02.100 09:28:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728286137 00:10:02.100 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728286137_collect-cpu-load.pm.log 00:10:02.100 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728286137_collect-vmstat.pm.log 00:10:02.100 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728286137_collect-cpu-temp.pm.log 00:10:02.100 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728286137_collect-bmc-pm.bmc.pm.log 00:10:02.671 09:28:58 -- common/autobuild_common.sh@505 -- $ trap 
stop_monitor_resources EXIT 00:10:02.671 09:28:58 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:10:02.671 09:28:58 -- spdk/autopackage.sh@14 -- $ timing_finish 00:10:02.671 09:28:58 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:10:02.671 09:28:58 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:10:02.671 09:28:58 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:10:02.931 09:28:58 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:10:02.931 09:28:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:02.931 09:28:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:02.931 09:28:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.931 09:28:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:10:02.931 09:28:58 -- pm/common@44 -- $ pid=505560 00:10:02.931 09:28:58 -- pm/common@50 -- $ kill -TERM 505560 00:10:02.931 09:28:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.931 09:28:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:10:02.931 09:28:58 -- pm/common@44 -- $ pid=505562 00:10:02.931 09:28:58 -- pm/common@50 -- $ kill -TERM 505562 00:10:02.931 09:28:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.931 09:28:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:10:02.931 09:28:58 -- pm/common@44 -- $ pid=505564 00:10:02.931 09:28:58 -- pm/common@50 -- $ kill -TERM 505564 00:10:02.931 09:28:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:02.931 09:28:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:10:02.931 09:28:58 -- pm/common@44 -- $ pid=505586 00:10:02.931 09:28:58 -- pm/common@50 -- $ sudo -E kill -TERM 505586 00:10:02.931 + [[ -n 365351 ]] 00:10:02.931 + sudo kill 365351 00:10:02.941 [Pipeline] } 00:10:02.956 [Pipeline] // stage 00:10:02.962 [Pipeline] } 00:10:02.979 [Pipeline] // timeout 00:10:02.985 [Pipeline] } 00:10:02.998 [Pipeline] // catchError 00:10:03.002 [Pipeline] } 00:10:03.018 [Pipeline] // wrap 00:10:03.024 [Pipeline] } 00:10:03.035 [Pipeline] // catchError 00:10:03.044 [Pipeline] stage 00:10:03.046 [Pipeline] { (Epilogue) 00:10:03.060 [Pipeline] catchError 00:10:03.061 [Pipeline] { 00:10:03.075 [Pipeline] echo 00:10:03.077 Cleanup processes 00:10:03.082 [Pipeline] sh 00:10:03.367 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:03.367 505715 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:10:03.367 505957 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:03.382 [Pipeline] sh 00:10:03.665 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:03.665 ++ grep -v 'sudo pgrep' 00:10:03.665 ++ awk '{print $1}' 00:10:03.665 + sudo kill -9 505715 00:10:03.676 [Pipeline] sh 00:10:03.956 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:10:16.182 [Pipeline] sh 00:10:16.467 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:10:16.467 Artifacts sizes are good 00:10:16.481 [Pipeline] 
archiveArtifacts 00:10:16.488 Archiving artifacts 00:10:16.656 [Pipeline] sh 00:10:16.941 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:10:16.955 [Pipeline] cleanWs 00:10:17.049 [WS-CLEANUP] Deleting project workspace... 00:10:17.049 [WS-CLEANUP] Deferred wipeout is used... 00:10:17.056 [WS-CLEANUP] done 00:10:17.058 [Pipeline] } 00:10:17.075 [Pipeline] // catchError 00:10:17.087 [Pipeline] sh 00:10:17.370 + logger -p user.info -t JENKINS-CI 00:10:17.379 [Pipeline] } 00:10:17.395 [Pipeline] // stage 00:10:17.401 [Pipeline] } 00:10:17.419 [Pipeline] // node 00:10:17.425 [Pipeline] End of Pipeline 00:10:17.465 Finished: SUCCESS